[29] Global mobile Suppliers Association (GSA), The Future of IMT in the 3300-4200 MHz Frequency Range, June 2017.
[30] GSM Association, 5G Spectrum, Public Policy Position, November 2016.
[31] China Mobile Communications Corporation, Huawei Technologies Co., Ltd., Deutsche Telekom AG, and Volkswagen, 5G Service-Guaranteed Network Slicing White Paper, February 2017.
[32] Mobile and Wireless Communications Enablers for the Twenty-Twenty Information Society-II (METIS-II), Deliverable D1.1, Refined Scenarios and Requirements, Consolidated Use Cases, and Qualitative Techno-Economic Feasibility Assessment, January 2016.
[33] Mobile and Wireless Communications Enablers for the Twenty-Twenty Information Society (METIS), Deliverable D6.1, Simulation Guidelines, October 2013.
[34] K. Mallinson, The path to 5G: as much evolution as revolution, 3GPP, <http://www.3gpp.org/news-events/3gpp-news/1774-5g_wiseharbour>, May 2016.
[35] 5G Forum, Republic of Korea, 5G Vision, Requirements, and Enabling Technologies (V.1.0), <https://www.5gforum.org>, March 2015.
[36] IEEE Std 802.15.7-2011, IEEE Standard for Local and Metropolitan Area Networks, Part 15.7: Short-Range Wireless Optical Communication Using Visible Light, September 2011.
[37] Evolving LTE to fit the 5G future, Ericsson Technology Review, January 2017.
[38] Wikipedia, Information-centric networking, <https://en.wikipedia.org/wiki/Information-centric_networking>.
[39] 5G-PPP Use Cases and Performance Evaluation Models, <http://www.5g-ppp.eu/>, April 2016.
[40] NTT DOCOMO Technical Journal, ITU Radio-Communication Assembly 2015 (RA-15) Report, Future Mobile Phone Technologies Standardization, 2015 ITU World Radio-Communication Conference (WRC-15) Report, Standardization of Mobile Phone Spectrum, Vol. 18, No. 1, July 2016.
[41] NMC Consulting Group (NETMANIAS), Timeline of 5G Standardization in ITU-R and 3GPP, January 2017.
[42] NMC Consulting Group (NETMANIAS), Network Architecture Evolution from 4G to 5G, December 2015.
[43] NMC Consulting Group (NETMANIAS), E2E Network Slicing: Key 5G Technology: What Is It? Why Do We Need It? How Do We Implement It?, November 2015.
[44] Ixia, Test Considerations for 5G, White Paper, July 2017.
[45] Alliance for Telecommunications Industry Solutions (ATIS), 5G Reimagined: A North American Perspective, February 2017.
[46] 5G Americas, White Paper on 5G Spectrum Recommendations, April 2017.
[47] Qualcomm Technologies, Inc., LTE-U/LAA, MuLTEfire™ and Wi-Fi: Making Best Use of Unlicensed Spectrum, September 2015.
[48] Skyworks Solutions, Inc., 5G in Perspective: A Pragmatic Guide to What's Next, White Paper, March 2017.

CHAPTER 1

5G Network Architecture

The 5G network architecture has been designed to support fast and reliable connectivity as well as diverse applications and services, enabling flexible deployments using new concepts such as network function virtualization (NFV), software-defined networking (SDN), and network slicing. The 5G system supports a service-oriented architecture with modularized network services. The service-oriented 5G core (5GC) network is built on the principle that 5G systems must support a wide range of services with different characteristics and performance requirements. The service-oriented architecture and interfaces in 5G systems make the future networks flexible, customizable, and scalable. Network service providers can leverage the service-oriented architecture design in 5G to manage and adapt the network capabilities, for example, by dynamically discovering, adding, and updating network services while preserving performance and backward compatibility.

The main difference in the service-based architecture is in the control plane where, instead of predefined interfaces between entities, a service model is used in which components request a new network entity to discover and communicate with other entities over application programming interfaces (APIs). This notion is closer to the cloud networking concept and more attractive to operators that demand flexibility and adaptability in their networks.
The challenge is that it is harder to implement using today's cloud platforms, and thus it is likely to be part of future deployments. It is also important for the 5G network to enable each network function (NF) to directly interact with other network functions. The architecture design does not preclude the use of an intermediate function to help route control-plane messages. It is also desirable to minimize dependencies between the access network (AN) and the core network.

There are some new concepts that have fundamentally changed the framework of 5G networks and differentiated them from the previous generations. The 5G network architecture leverages the structural separation of hardware and software, as well as the programmability offered by SDN and NFV. As such, the 5G network architecture is a native SDN/NFV architecture covering mobile/fixed terminals, infrastructure, and NFs, enabling new capabilities and management and orchestration (MANO) functions. One of the most innovative concepts that has been incorporated into the design of the next-generation networks is the separation of user-plane and control-plane functions, which allows individually scalable and flexible centralized or distributed network deployments. This concept forms the basis of SDN. Other schemes include modularized functional design, which enables flexible and efficient network slicing. Having such requirements in mind, the third-generation partnership project (3GPP) has developed a flat architecture where the control-plane functions are separated from the user plane in order to allow them to scale independently. Another central idea in the design of 5G networks has been to minimize dependencies between the AN and the core network with a unified access-agnostic core network and a common AN/core network interface which integrates different 3GPP and non-3GPP access types.
In order to further support multi-radio access, the network architecture was required to provision a unified authentication framework. The support of stateless NFs, where the compute resource elements are decoupled from the storage resource elements, is intended to create a disaggregated architecture. To support low-latency services and access to local data networks, the user-plane functions (UPFs) can be deployed close to the edge of the AN, which further requires support of capability exposure and concurrent access to local and centralized services.

The combination of SDN and functional virtualization enables dynamic, flexible deployment and on-demand scaling of NFs, which are necessary for the development of 5G mobile packet core networks. 5G network design requires a common core network associated with one or more ANs to be part of a network slice (e.g., fixed and mobile access within the same network slice). A network slice may include control-plane functions associated with one or more UPFs, and/or service- or service-category-specific control-plane and user-plane functional pairs (e.g., a user-specific multimedia application session). A device may connect to more than one slice. When a device accesses multiple network slices simultaneously, a control-plane function or a set of control-plane functions should be common to and shared among the network slices and their associated resources. In order to enable different data services and requirements, the elements of the 5GC, also called NFs, have been further simplified, with most of them being software-based so that they can be adapted and scaled on a need basis.
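To make the slice-composition idea concrete, the following minimal Python sketch models two slices that share a control-plane function while keeping dedicated user planes. The class names and NF labels are illustrative only; they are not a 3GPP data model.

```python
# Minimal sketch of slice composition: illustrative only, not a 3GPP data model.
# NF names (AMF, SMF, UPF) follow 5GC naming conventions; the classes are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class NetworkFunction:
    name: str      # e.g., "AMF", "SMF", "UPF"
    plane: str     # "control" or "user"

@dataclass
class NetworkSlice:
    slice_id: str
    control_plane: List[NetworkFunction] = field(default_factory=list)
    user_plane: List[NetworkFunction] = field(default_factory=list)

# A control-plane function shared by both slices, as required when a
# device accesses multiple slices simultaneously.
shared_amf = NetworkFunction("AMF", "control")

embb = NetworkSlice(
    "eMBB-01",
    control_plane=[shared_amf, NetworkFunction("SMF-embb", "control")],
    user_plane=[NetworkFunction("UPF-embb", "user")],
)
urllc = NetworkSlice(
    "URLLC-01",
    control_plane=[shared_amf, NetworkFunction("SMF-urllc", "control")],
    user_plane=[NetworkFunction("UPF-urllc", "user")],
)

# The shared AMF appears in both slices; the user planes remain dedicated.
assert shared_amf in embb.control_plane and shared_amf in urllc.control_plane
```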
Today's static measurements of network and application performance are neither extensible to the dynamic nature of SDN/NFV-based 5G networks and functionalities, nor capable of creating any form of automation to produce self-adapting behavior. The pace at which these environments change requires sophisticated analysis of real-time measurements, telemetry data, flow-based information, etc., in combination with user profile and behavior statistics. Creating a dynamic model to analyze the resulting big data requires artificial intelligence/machine learning techniques that will pave the way for migration from today's process-based analytics toward the predictive, descriptive, and ultimately cognitive analytics required for self-organizing, self-optimizing, and self-healing networks.

3GPP has been working on the standardization of 5G access and core networks since 2015, with a goal of large-scale commercialization in 2020 and beyond. The 3GPP system architecture group finalized the first study items in December 2016 and published the 3GPP TR 23.799 specification as an outcome of the study. The normative specifications of the 5G network architecture and services have been published in numerous 3GPP standards documents [6].

In this chapter, we discuss 5G network architecture design principles from the 3GPP perspective and the innovative solutions that have formed the foundation of 5G networks. We further study the access/core network entities, interfaces, and protocols as well as quality of service (QoS), security, mobility, and power management in 5G networks.

1.1 Design Principles and Prominent Network Topologies

The 5G system supports a service-based architecture and interfaces with modularized network services, enabling flexible, customizable, and independently deployable networks. Network service providers can leverage the 5G service-oriented architecture to manage and customize the network capabilities by dynamically discovering, adding, and updating network services while preserving performance and compatibility with existing deployments. The 5GC and ANs were required to be functionally decoupled to create a radio-technology-agnostic architecture in order to realize the 5G performance targets for different usage scenarios. As an example, reduction in network transport latency requires placement of computing and storage resources at the edge of the network to enhance service quality and user experience.
The tactile Internet is a forward-looking usage scenario under the category of ultra-reliable low-latency communication (URLLC) services. A notable requirement for enabling the tactile Internet is to place the content- and context-bearing virtualized infrastructure at the edge of the AN [mobile edge computing (MEC)]. This relocation provides a path for new business opportunities and collaborative models across various service platforms. Improved access to content, context, and mobility are vital elements to address the demands for reliability, availability, and low latency [62].

While much has been written and speculated about the next-generation radio standards that are going to form the basis of the forthcoming 5G systems, the core network is also an essential piece in achieving the goals set forth for these systems and in helping to ensure the competitiveness and relevance of network operations in the future. With the advent of heterogeneous ANs, that is, the deployment of different radio access technologies with different coverage footprints, seamless connectivity and service continuity can be provided in various mobility classes. The availability of different types of footprints for a given type of wireless access technology characterizes heterogeneous access; for example, a radio network access node such as a base station with large to small coverage area is referred to as a macro-, pico-, or femtocell, respectively. A combination of these types of base stations offers the potential to optimize both coverage and capacity by appropriately distributing smaller-size base stations within a larger macro-base station coverage area. Since the radio access technology is common across these different types of base stations, common methods for configuration and operation can be utilized, thereby enhancing integration and operational efficiencies. The diversity of coverage, the harnessing of spectrum (e.g., licensed and unlicensed spectrum), and the different transmission power levels based on coverage area provide strategies for optimizing the allocation and efficient utilization of radio resources.
The expanding diversity of deployment options, while considering the ultra-low latency, high reliability, availability, and mobility requirements, demands a significant reduction in the overhead and complexity associated with the frequent setup and teardown of the access/core network bearers and tunneling protocols. However, the changes in the geographical location of the point of attachment of a device to the AN, as a result of mobility, would inevitably add more tunneling overhead in a functionally virtualized network, which could adversely affect the delay-sensitive service experience. The 5G networks support multiple radio access mechanisms, including fixed and mobile access, making fixed/mobile convergence an important consideration. 5G systems further support the use of non-3GPP access for off-loading and maintaining service continuity.

The logical/physical decomposition of radio NFs is required to meet the diverse information transport demands and to align them with the requirements of next-generation services in various use cases. The decomposition of the radio network protocols/functions across layer 1, layer 2, and layer 3 depends on the degree of centralization or distribution required. It includes placing more functions corresponding to the upper layers of the radio network protocol stack in the distributed entities when high-performance transport (e.g., high bandwidth, high capacity, low latency, low jitter, etc.) is available. Optimized scheduling at a centralized entity is critical for high-performance transport across multiple distributed entities [e.g., base stations, remote radio heads (RRHs), etc.]. Note that the terms remote radio head (RRH) and remote radio unit (RRU) have the same meaning and will be used interchangeably in this book. For relatively low-performance transport options, more functions corresponding to the upper layers of the radio network protocol stack are moved to the central entity to optimize the cost/performance metrics associated with the distributed entities. The choice of functional split will determine the fronthaul/backhaul capacity requirements and the associated latency specifications and performance. This will impact the network architecture planning, since it determines the placement of nodes and the permissible distance between them.
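The bandwidth implications of a low-layer functional split can be illustrated with a back-of-the-envelope calculation. The sketch below uses the widely quoted CPRI example of a 20 MHz LTE carrier with two antenna ports; the carrier parameters and overhead factors are standard CPRI conventions, but the example itself is illustrative rather than taken from this book.

```python
# Back-of-the-envelope CPRI-style fronthaul rate for a low-layer (PHY) split.
# A 20 MHz LTE carrier uses 30.72 Msps I/Q sampling; the 16/15 factor accounts
# for CPRI control words and 10/8 for 8B/10B line coding.
sample_rate = 30.72e6   # I/Q samples per second for a 20 MHz LTE carrier
bits_per_sample = 15    # bit width per I and per Q sample
antennas = 2            # antenna ports carried over the same link

payload = sample_rate * 2 * bits_per_sample * antennas  # factor 2 for I and Q
cpri_rate = payload * (16 / 15) * (10 / 8)              # control + line coding

print(f"{cpri_rate / 1e9:.4f} Gbps")  # ~2.4576 Gbps (CPRI line rate option 3)
```

The result, roughly 2.46 Gbps for a single 20 MHz carrier, shows why low-layer splits demand high-capacity, low-latency fronthaul, whereas higher-layer splits carry rates closer to the user throughput itself.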
The core network in the 5G systems allows a user to access network services independent of the type of access technology. The network service provider utilizes a common framework for authentication and billing via a unified customer database to authorize access to a service independent of the type of access. The 5G system provides termination points, or points of attachment, to the core network for control-plane and user-plane entities. These points are selected based on location, mobility, and service requirements, and they may dynamically change during the lifetime of a service flow based on those requirements. To achieve a unified core network, common mechanisms of attachment are supported for both 3GPP and non-3GPP ANs. The 5G system allows simultaneous multiple points of attachment to be selected per device on a per-service-flow basis. Control-plane functions and UPFs are clearly separated with appropriate open interfaces.

Device types are characterized by a variety of attributes, including three broad categories of interfaces, namely human-human (H-H), human-machine (H-M), and machine-machine (M-M). Examples of devices that belong to these categories include smartphones (H-H), robots (H-M or M-M), drones (H-M or M-M), wearable devices (H-M), and smart objects and sensors (M-M). The attributes and capabilities associated with these devices vary: high power/low power, energy constrained/non-energy constrained, high cost/low cost, high performance/low performance, delay sensitive/delay tolerant, high reliability, and precision sensitive.
These devices are further distinguished in terms of diverse media types, such as audio, visual, haptic, vestibular, etc. The devices may be connected to a network either via a wired connection (e.g., Ethernet or optical transport) or a wireless connection (e.g., cellular, Wi-Fi, or Bluetooth). The cloud radio access model includes both composite and heterogeneous types of access, where moving the computational complexity and storage from the device to the edge of the network would enable diverse services using a variety of device types (e.g., H-H, H-M, and M-M) and would enable energy conservation in devices with limited computing/storage resources.

Flexibility applies not only to network hardware and software but also to network management. An example would be the automation of network instance setup in the context of network slicing, which relies on the optimization of different NFs to deliver a specific service satisfying certain service requirements. Flexible management will enable future networks to support new types of service offerings that previously would have made no technical or economic sense. Many aspects of the 5G network architecture need to be flexible to allow services to scale. It is likely that networks will need to be deployed using different hardware technologies, with different feature sets implemented at different physical locations in the network depending on the use case. In some use cases, the majority of user-plane traffic may require only very simple processing, which can be run on low-cost hardware, whereas other traffic may require more advanced and complex processing. Cost-efficient scaling of the user plane to handle the increasing individual and aggregated bandwidths is a key component of a 5GC network.

As we mentioned earlier, supporting the separation of the control-plane and user-plane functions is one of the most significant principles of the 5GC network architecture. The separation allows control- and user-plane resources to scale independently and supports migration to cloud-based deployments.
By separating user- and control-plane resources, the user-plane and control-plane entities may also be implemented and instantiated in different logical/physical locations. For example, the control plane can be implemented in a central site, which makes management and operation less complex, and the user plane can be distributed over a number of local sites, moving it closer to the user. This is beneficial, as it shortens the round-trip time between the user and the network service and reduces the amount of bandwidth required between sites. Content caching is an example of how locating functions at a local site reduces the required bandwidth between sites. Separation of the control and user planes is a fundamental concept of SDN, and the flexibility of 5GC networks will improve significantly by adopting SDN principles.

User-plane protocols, which can be seen as a chain of functions, can be deployed to suit a specific use case. Given that the connectivity needs of each use case vary, the most cost-efficient deployment can be uniquely created for each scenario. For example, the connectivity needs of a massive machine-type communication (mMTC) service characterized by small payload and low mobility are quite different from the needs of an enhanced mobile broadband (eMBB) service with large payload and high mobility characteristics. An eMBB service can be broken down into several subservices, such as video streaming and web browsing, which can in turn be implemented by separate functional chains within the network slice. Such additional decomposition within the user-plane domain further increases the flexibility of the core network. The separation of the control and user planes enables the use of different processing platforms for each one. Similarly, different user planes can be deployed with different run-time platforms within a user plane, all depending on the cost efficiency of the solution. In the eMBB use case, one functional chain of services may run on general-purpose processors, whereas a service that requires simple user-data processing can be processed on low-cost hardware platforms.
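As a rough illustration of such functional chains, the following Python sketch composes use-case-specific user-plane chains from simple packet-processing functions. The function names and packet representation are hypothetical, chosen only to show how a chain can be assembled per service.

```python
# Sketch: composing use-case-specific user-plane chains from simple
# packet-processing functions. Names are illustrative, not 3GPP-defined.
from functools import reduce
from typing import Callable, List

Packet = dict
UPFunction = Callable[[Packet], Packet]

def classify(p: Packet) -> Packet: p["qos"] = "default"; return p
def count_bytes(p: Packet) -> Packet: p["counted"] = True; return p
def cache_lookup(p: Packet) -> Packet: p["cache"] = "miss"; return p

def chain(functions: List[UPFunction]) -> UPFunction:
    # Apply the functions left to right to each packet.
    return lambda p: reduce(lambda pkt, f: f(pkt), functions, p)

# mMTC: small payload, minimal processing; eMBB video adds caching.
mmtc_chain = chain([classify, count_bytes])
embb_video_chain = chain([classify, count_bytes, cache_lookup])

print(embb_video_chain({"payload": b"segment-0"}))
```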
It is obvious that enabling future expansions requires greater flexibility in the way that networks are built. While network slicing is a key enabler to achieving greater flexibility, increasing flexibility may lead to greater complexity at all levels of the system, which in turn tends to increase the cost of operation and delay the deployments.

Traditional radio access network (RAN) architectures consist of several stand-alone base stations, each covering a relatively small area. Each base station processes and transmits/receives its own signal to/from the mobile terminals in its coverage area and forwards the user data payloads from the mobile terminal to the core network via a dedicated backhaul link. Owing to the limited spectral resources, network operators reuse the frequency among different base stations, which can cause interference among neighboring cells. There are several limitations in the traditional cellular architecture, including the cost and the operation and maintenance of the individual base stations; the increased inter-cell interference level due to the proximity of the other base stations used to increase network capacity; and the variation of the amount of loading and user traffic across different base stations. As a result, the average utilization rate of individual base stations is very low, since the radio resources cannot be shared among different base stations. Therefore, all base stations are designed to handle the maximum traffic rather than the average traffic, resulting in over-provisioning of radio resources and increased power consumption at idle times. In earlier generations of cellular networks, the macro-base stations used an all-in-one architecture, that is, the analog circuitry and digital processing hardware were physically co-located, and the radio frequency (RF) signal generated by the base station was transported over coaxial cables to the antennas.
The cloud-RAN (C-RAN) may be viewed as an architectural evolution of the distributed base station system (Fig. 1.1). It takes advantage of many technological advances in wireless and optical communication systems. For example, it uses the latest CPRI specifications (Common Public Radio Interface, http://www.cpri.info/), low-cost coarse/dense wavelength division multiplexing (CWDM/DWDM) technology, and mmWave to allow transmission of baseband signals over long distances, thus achieving large-scale centralized base station deployment. It applies recent data center network technology to allow a low-cost, high-reliability, low-latency, and high-bandwidth interconnect network in the BBU pool. In the run up to 5G networks, the C-RAN utilizes open platforms and real-time virtualization technology rooted in cloud computing to achieve dynamic shared resource allocation and support of multi-vendor, multi-technology environments.

[Figure 1.1: Network architecture evolution from 4G to 5G, from distributed RAN with dedicated equipment, to cloud RAN with CU/DU functional splits over fronthaul, to SDN/NFV-based central and edge clouds with separated 5G core control-plane/user-plane functions and network slicing serving high data rate, voice, massive IoT, and mission-critical IoT services [60].]
Fig. 1.1 illustrates the evolution stages from 4G to 5G networks, where the distributed architectures evolved to centralized architectures, NFs were virtualized, control-plane and user-plane functions were later separated, and ultimately network slicing and edge computing were introduced to further advance the network architectures toward flexibly supporting various 5G use cases and applications.

Wavelength division multiplexing allows different data streams to be sent simultaneously over a single optical fiber network. There are two main types of wavelength division multiplexing technology in use: coarse wavelength division multiplexing (CWDM) and dense wavelength division multiplexing (DWDM). CWDM allows up to 18 channels to be transported over a single dark fiber, while DWDM supports up to 88 channels. Both technologies are independent of the transport protocol. The main difference between CWDM and DWDM lies in how the transmission channels are spaced along the electromagnetic spectrum. Wavelength division multiplexing technology uses infrared light, which lies beyond the spectrum of visible light, and can use wavelengths between 1260 and 1670 nm. Most fibers are optimized for the two regions around 1310 and 1550 nm, which provide effective channels for optical networking. CWDM is a convenient and low-cost solution for distances up to 70 km; however, between 40 km and its maximum distance of 70 km, CWDM tends to be limited to eight channels due to a phenomenon called the water peak of the fiber. CWDM signals cannot be amplified, making the 70 km estimate an absolute maximum. DWDM works on the same principle as CWDM, but in addition to the increased channel capacity, it can also be amplified to support much longer distances. Each CWDM channel is spaced 20 nm apart from the adjacent channel, whereas DWDM channels fit into the same 1310 and 1550 nm regions with a spacing of only 0.8 nm (www.Smartoptics.com).
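The channel counts quoted above can be sanity-checked with simple arithmetic, assuming the ITU CWDM grid (20 nm spacing between 1271 and 1611 nm) and approximate C-band edges for DWDM; the band edges used below are rounded values.

```python
# Quick check of the channel counts quoted above. The CWDM grid follows
# ITU-T G.694.2 (20 nm spacing, 1271-1611 nm); the C-band edges are approximate.
cwdm = [1271 + 20 * k for k in range(18)]   # 18 channels: 1271, 1291, ..., 1611 nm
print(len(cwdm), cwdm[0], cwdm[-1])         # -> 18 1271 1611

c_band = (1530.0, 1565.0)                   # approximate C-band, nm
print(int((c_band[1] - c_band[0]) / 0.8))   # ~43 channels at 0.8 nm (100 GHz) spacing
print(int((c_band[1] - c_band[0]) / 0.4))   # ~87 channels at 0.4 nm (50 GHz) spacing,
                                            # consistent with the "up to 88" figure
```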
Having the above principles and requirements in mind, the 3GPP 5G system (5GS) architecture has been designed to support data connectivity and new services by enabling deployments to use SDN/NFV methods. The 5GS architecture leverages service-based interactions between control-plane NFs and supports the separation of user-plane functions from control-plane functions. The modularized functional design allows flexible and efficient network slicing. It defines procedures, that is, sets of interactions between NFs, as services, so that their reuse is possible. It enables each NF to directly interact with other NFs. The 5GS design minimizes dependencies between the access and the core networks. It further supports a unified authentication framework; stateless NFs, where compute resources are decoupled from storage resources; and capability exposure, as well as concurrent access to local and centralized services. The 5GS supports roaming with both home-routed traffic and local-breakout traffic in the visited network. In the 5GS, the interactions between NFs are represented either through a service-based representation, where NFs within the control plane enable other authorized NFs to access their services, which may include point-to-point reference points; or a reference-point representation, where the interactions between the NFs are described by point-to-point reference points between any two NFs. The NFs within the 5GC network control plane use service-based interfaces for their interactions [3].
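As an illustration of a service-based interaction, the sketch below shows an NF consumer querying the network repository function (NRF) for SMF instances using the Nnrf_NFDiscovery service defined in 3GPP TS 29.510. The NRF address is a placeholder, and a real deployment would use HTTP/2 with mutual TLS rather than this simplified HTTP client.

```python
# Sketch of a service-based interaction: an AMF discovering SMF instances
# through the NRF's Nnrf_NFDiscovery service (3GPP TS 29.510). The NRF
# endpoint below is hypothetical.
import requests

NRF = "https://nrf.example.operator.com"  # placeholder NRF endpoint

resp = requests.get(
    f"{NRF}/nnrf-disc/v1/nf-instances",
    params={"target-nf-type": "SMF", "requester-nf-type": "AMF"},
    timeout=5,
)
resp.raise_for_status()

# The SearchResult body lists candidate NF instances the consumer may select.
for nf in resp.json().get("nfInstances", []):
    print(nf["nfInstanceId"], nf.get("ipv4Addresses"))
```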
The general principles that guided the definition of the 3GPP 5G radio access network (NG-RAN) and the 3GPP 5GC network architecture and network interfaces are based on the logical separation of signaling and data transport networks, as well as the separation of NG-RAN and 5GC functions from the transport functions. As a result, the addressing schemes used in NG-RAN and 5GC are decoupled from the addressing schemes of the transport functions. The protocols over the air interface and the NG interfaces are divided into user-plane protocols, which are the protocols implementing the actual protocol data unit (PDU) session service, carrying user data through the access stratum (AS); and control-plane protocols, which are the protocols for controlling the PDU sessions and the connection between the user equipment (UE) and the network from different aspects, including requesting the service, controlling different transmission resources, handover, etc. [15].

1.1.1 Network and Service Requirements

The service-centric 5G network architecture has been designed to flexibly and efficiently meet the diverse requirements of the emerging applications and services. With SDN and NFV supporting the underlying physical infrastructure, the 5G network systematically centralizes the access, transport, and core network components. Migration to cloud-based architectures is meant to support wide-ranging 5G services and enables key technologies such as network slicing, on-demand deployment of service anchors, and component-based NFs.

The design principles of the 3GPP next-generation system architecture have deviated from those of the long-term evolution (LTE) evolved packet core (EPC) network in order to address the challenging requirements of 5G applications and services. While the design of the 5G network started from a clean slate, the requirements for support of the new radio access technologies (RATs) as well as the evolved LTE, legacy systems, and non-3GPP radio access have caused the new design to borrow a large number of concepts from the predecessor networks.
Some of the key tenets of 5G network design include the requirement for logically independent network slicing based on a single physical network infrastructure to meet the 5G service requirements, and the provision of a dual-connectivity-based cloud architecture to support various deployment scenarios. The network design further relies on the C-RAN architecture to deploy different radio access technologies in order to provide multi-standard connectivity and to implement on-demand deployment of the RAN functions required by 5G services. It simplifies the core network architecture to implement on-demand configuration of NFs through control- and user-plane separation, component-based functions, and unified database management. It further implements automatic network slicing service generation, maintenance, and termination for various services to reduce operating expenses through agile network operation and management.

The 3GPP TS 22.261 specification, Service Requirements for the 5G System, contains the performance targets and basic capabilities prescribed for 5G networks. Among those requirements, one can distinguish support for fixed, mobile, wireless, and satellite access technologies; a scalable and customizable network that can be tailored to serve multiple services and vertical markets (e.g., network slicing, NFV); resource efficiency for services ranging from low-rate Internet of things (IoT) services to high-bandwidth multimedia services; energy efficiency and network power optimization; network capability exposure to allow third-party Internet service providers and Internet content providers to manage network slices and deploy applications in the operator's service hosting environment; indirect connectivity from a remote UE via a relay UE to the network; and service continuity between indirect connections and direct connections. 3GPP TS 22.261 defines performance targets for different scenarios (e.g., urban macro, rural macro, and indoor hotspot) and applications (e.g., remote control, monitoring, intelligent transport systems, and tactile communications).
The general requirements that led the design of the NG-RAN architecture and interfaces included the logical separation of signaling and data transport networks and the separation of access and core NFs from transport functions, regardless of their possible physical co-location. Other considerations included the independence of the addressing schemes used in NG-RAN and 5GC from those of the transport functions, and the control of mobility for radio resource control (RRC) connections via NG-RAN. The functional division across the NG-RAN interfaces has limited options, and the interfaces are based on a logical model of the entity controlled through the corresponding interface. As was the case with LTE, one physical network element can implement multiple logical nodes.

1.1.2 Virtualization of Network Functions

NFV is an alternative approach to designing, deploying, and managing networking services, as well as a complement to SDN for network management. While they both manage networks, they rely on different methods: SDN separates the control and forwarding planes to offer a centralized view of the network, whereas NFV primarily focuses on optimizing the network services themselves. NFV transforms the way network operators architect networks by evolving standard server-based virtualization technology to consolidate various network equipment types onto industrial-grade high-volume servers, switches, and storage, which could be located in data centers, network nodes, and end-user premises. NFV involves the implementation of NFs in software that can run on a range of network server hardware and that can be moved to, or instantiated in, various locations in the network as required, without the need for installation of new equipment. NFV is complementary to SDN, but not dependent on it, or vice versa; NFV can be implemented without SDN, although the two concepts and solutions can be combined to gain potentially greater value. NFV goals can be achieved using non-SDN mechanisms, relying on the techniques currently in use in many data centers.
But approaches relying on the separation of the control and data forwarding planes, as proposed by SDN, can enhance performance, simplify compatibility with existing deployments, and facilitate operation and maintenance procedures. NFV is able to support SDN by providing the infrastructure upon which the SDN software can run. Furthermore, NFV aligns closely with the SDN objectives to use commodity servers and switches. The latter is applicable to any user-plane or control-plane functional processing in mobile and fixed networks. Some example application areas include switching elements; mobile core network nodes; functions contained in home routers and set-top boxes; tunneling gateway elements; traffic analysis; test and diagnostics; the Internet protocol (IP) multimedia subsystem; authentication, authorization, and accounting (AAA) servers; policy control and charging platforms; and security functions [48].

A switch is a device that typically transports traffic between segments of a single local area network. Internal firmware instructs the switch where to forward each packet it receives, and typically a switch uses the same path for every packet. In a software-defined networking environment, the switch firmware that dictates the path of packets would be removed from the device and moved to the controller, which would orchestrate the path based on a macro-view of real-time traffic patterns and requirements.

Authentication, authorization, and accounting (AAA) is a framework for intelligently controlling access to computer resources, enforcing policies, auditing usage, and providing the information necessary to bill for services. These combined processes are considered important for effective network management and security. An AAA server is a server program that handles user requests for access to computer resources and, for an enterprise, provides AAA services. The AAA server typically interacts with network access and gateway servers and with databases and directories containing user information.
NFV leverages modern technologies such as those developed for cloud computing. At the core of these cloud technologies are virtualization mechanisms. Hardware virtualization is realized by means of hypervisors, as well as the use of virtual Ethernet switches (e.g., vSwitch) for connecting traffic between virtual machines (VMs) and physical interfaces. A hypervisor is a function which abstracts, or isolates, the operating systems and applications from the underlying computer hardware; this abstraction allows the underlying host machine hardware to independently operate one or more VMs as guests, allowing multiple guest VMs to effectively share the system's physical compute resources, such as processor cycles, memory space, and network bandwidth. A virtual switch is a software program that allows one VM to communicate with another. Similar to its counterpart, the physical Ethernet switch, a virtual switch does more than just forward data packets; it can intelligently direct communication on the network by inspecting packets before forwarding them. Some vendors embed virtual switches into their virtualization software, but a virtual switch can also be included in a server's hardware as part of its firmware. Open vSwitch is an open-source implementation of a distributed virtual multilayer switch, whose main purpose is to provide a switching stack for hardware virtualization environments while supporting multiple protocols and standards used in computer networks.

For communication-oriented functions, high-performance packet processing is made possible through high-speed multi-core CPUs with high I/O bandwidth, smart network interface cards for load sharing and transmission control protocol (TCP) offloading, routing packets directly to VM memory, and poll-mode (rather than interrupt-driven) Ethernet drivers such as the Data Plane Development Kit (DPDK). DPDK is a set of data-plane libraries and network interface controller drivers for fast packet processing from Intel Corporation. It provides a programming framework for x86-based servers and enables faster development of high-speed data packet networking applications. The DPDK framework creates a set of libraries for specific hardware/software environments through an environment abstraction layer, which conceals the environment-specific parameters and provides a standard programming interface to libraries, available hardware accelerators, and other hardware and operating system (Linux, FreeBSD) elements. Cloud infrastructures provide methods to enhance resource availability and usage by means of orchestration and management mechanisms, applicable to the automatic instantiation of virtual appliances in the network, the management of resources by properly assigning virtual appliances to CPU cores, memory, and interfaces, the reinitialization of failed VMs, snapshots of VM states, and the migration of VMs. A VM is an operating system or application environment that is installed on software which imitates dedicated hardware; the end user has the same experience on a VM as they would have on dedicated hardware.

As shown in Fig. 1.2, containers and VMs are two ways to deploy multiple, isolated services on a single platform [42-44]. The decision whether to use containers or VMs depends on the objective. Virtualization enables workloads to run in environments that are separated from their underlying hardware by a layer of abstraction. This abstraction allows servers to be divided into virtualized machines that can run different operating systems. Container technology offers an alternative method of virtualization, in which a single operating system on a host can run many different applications from the cloud.

[Figure 1.2: NFV software/hardware architecture models, contrasting hypervisor-based virtualization (a guest OS and binaries/libraries per VM over a hypervisor and host OS) with containers that share the host operating system on bare-metal hardware.]
One way to think of containers versus VMs is that VMs run several different operating systems on one compute node, whereas container technology offers the opportunity to virtualize the operating system itself. A VM is a software-based environment geared to simulate a hardware-based environment for the benefit of the applications it will host. Conventional applications are designed to be managed by an operating system and executed by a set of processor cores, and such applications can run within a VM without any rearchitecture. Container technology, on the other hand, has been around for more than a decade and is an approach to software development in which pieces of code are packaged in a standardized way so that they can quickly be plugged in and run on the Linux operating system. This enables portability of code and allows the operating system to be virtualized, sharing an instance of an operating system in the same way that a VM would divide a server. Therefore, instead of virtualizing the hardware like a VM, a container virtualizes at the operating-system level. Containers run in a layer on top of the host operating system and share its kernel. Containers have much lower overhead than VMs and a much smaller footprint.
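A small experiment makes the shared-kernel point tangible. The sketch below, which assumes a local Docker daemon and the docker Python SDK, runs a minimal container and prints the kernel version it sees; the result matches the host kernel, since the container virtualizes at the operating-system level rather than booting a guest OS.

```python
# Sketch: demonstrate that a container shares the host kernel.
# Requires the docker Python SDK (pip install docker) and a running Docker daemon.
import docker

client = docker.from_env()
output = client.containers.run(
    "alpine:latest",            # minimal image; no guest OS boot involved
    ["uname", "-r"],            # prints the kernel release seen inside the container
    remove=True,                # clean up the container after it exits
)
print(output.decode().strip())  # same kernel release as the host machine
```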
NFV decouples software implementations of NFs from the compute, storage, and networking resources they use. It thereby expands options for both enterprises and service providers, enabling both to create new capabilities and new services for their customers. With new opportunities come new challenges. Traditionally, NF implementations were packaged with the infrastructure they utilize; however, this may no longer be the case. As the physical network is decoupled from the infrastructure and network services, it is necessary to create both new management tools and orchestration solutions for providers to realize the benefits of NFV-based solutions. There are a number of challenges in implementing NFV which need to be addressed by the industry. Some of the main challenges include the following [43]:

- Portability/interoperability: This is the ability to load and execute virtual appliances in different but standardized data center environments, which can be provided by different vendors for different operators. The challenge is to define a unified interface which clearly decouples the software instances from the underlying hardware, as represented by VMs and their hypervisors. Portability and interoperability are very important, as they create different ecosystems for virtual appliance vendors and data center vendors, while both ecosystems are clearly coupled and depend on each other. Portability also provides the operator with the freedom to optimize the location and required resources of the virtual appliances without constraints.

- Performance trade-off: Since the NFV approach is based on conventional hardware as opposed to customized hardware, there could be a possible decrease in performance. The challenge is how to limit the performance degradation by using appropriate hypervisors, hardware accelerators, and advanced software technologies such that the effects on latency, throughput, and processing overhead are minimized.

- Migration, coexistence, and compatibility with existing platforms: Implementations of NFV must coexist with network operators' legacy network equipment and must further be compatible with their existing element management systems (EMSs), network management systems (NMSs), operations support systems (OSSs), and business support systems (BSSs), and potentially with existing IT orchestration systems, if NFV orchestration and IT orchestration need to converge. The NFV architecture must support a migration path from today's proprietary physical network appliance-based solutions to more open, standards-based virtual network appliance solutions. In other words, NFV must work in a hybrid network composed of classical physical network appliances and virtual network appliances. Virtual appliances must therefore use existing north-bound interfaces (for management and control) and interwork with physical appliances implementing the same functions.
- Management and orchestration: NFV presents an opportunity, through the flexibility afforded by software network appliances operating in an open and standardized infrastructure, to rapidly align MANO north-bound interfaces with well-defined standards and abstract specifications. A consistent MANO architecture is therefore required. This will greatly reduce the cost and time needed to integrate new virtual appliances into a network operator's operating environment. SDN further extends this concept to streamlining the integration of packet and optical switches into the system; for example, a virtual appliance or NFV orchestration system may control the forwarding behavior of physical switches using SDN. Note that NFV will only scale if all of the functions can be automated.
- Security and resilience: Network operators need to be assured that the security, resilience, and availability of their networks are not compromised when VNFs are introduced. NFV improves network resilience and availability by allowing NFs to be recreated on demand after a failure. A virtual appliance should be as secure as a physical appliance if the infrastructure, particularly the hypervisor and its configuration, is secure. Network operators will be seeking tools to control and verify hypervisor configurations. They will also require security-certified hypervisors and virtual appliances.

- Network stability: It is important to ensure that the stability of the network is not impacted when managing and orchestrating a large number of virtual appliances created by different hardware and hypervisor vendors. This is particularly important when virtual functions are relocated, during reconfiguration events (e.g., due to hardware and software failures), or during a cyber attack. This challenge is not unique to NFV systems, and such instability might also occur in current networks. It should be noted that the occurrence of network instability may have adverse effects on performance parameters or the optimized use of resources.

- Complexity: It must be ensured that virtualized network platforms will be simpler to operate than those that exist today. A significant focus area for network operators is simplification of the plethora of complex network platforms and support systems which have evolved over decades of network technology evolution, while maintaining continuity to support important revenue-generating services.
- Integration: Seamless integration of multiple virtual appliances into existing industrial-grade servers and hypervisors is a major challenge for NFV schemes. Network operators need to be able to combine servers, hypervisors, and virtual appliances from different vendors without incurring significant integration costs. The ecosystem must offer integration services as well as maintenance and third-party application support, and it must be possible to resolve integration issues between several suppliers. The ecosystem will require mechanisms to validate new NFV products, and tools must be identified and/or created to address these issues.

An element management system (EMS) consists of systems and applications for managing network elements on the network element management layer. An EMS manages a specific type of telecommunications network element; it typically manages the functions and capabilities within each network element but does not manage the traffic between different network elements in the network. To support management of the traffic between network elements, the EMS communicates upward to higher-level network management systems, as described in the telecommunications management network layered model. An operations support system (OSS) is a platform used by service providers and network operators to support their network systems; it can help the operators maintain network inventory, provision services, configure components, and resolve network issues. It is typically linked with the business support system (BSS) to improve the overall customer experience. BSSs are platforms used by service providers and network operators to deliver the product management, customer management, revenue management (billing), and order management applications that help them run their business operations. BSS platforms are often linked to OSS platforms to support the overall delivery of services to customers.

1.1.2.1 Architectural Aspects

The NFV initiative began when network operators attempted to accelerate the deployment of new network services in order to advance their revenue and growth plans and found that customized hardware-based equipment limited their ability to achieve these goals. They studied standard IT virtualization technologies and found that NFV helped accelerate service innovation and provisioning in that space [48].

[Figure 1.3: High-level concept of a virtualized network, showing the NFV infrastructure (NFVI) and the software-defined networking (SDN) controller managed by the virtualized infrastructure manager (VIM) over the Nf-Vi interface, with the VIM connecting to management and orchestration (MANO) via the Vi-Vnfm and Or-Vi reference points.]

Fig. 1.3 illustrates the conceptual structure of a virtualized network and its main components. Following the conversion of physical NFs to software, that is, virtual network functions (VNFs), the software applications need a platform to run on. The NFV infrastructure (NFVI) consists of the physical and virtual compute, storage, and networking resources that VNFs need to run. The NFVI layer primarily interacts with two other NFV framework components: the VNFs and the virtual infrastructure manager (VIM). As we mentioned earlier, the VNF software runs on the NFVI. The VIM, on the other hand, is responsible for provisioning and managing the virtual infrastructure. As shown in Fig. 1.4, the VNF-to-NFVI interface (Vn-Nf) constitutes a data path through which network traffic traverses, while the NFVI-to-VIM interface (Nf-Vi) creates a control path that is used solely for management and carries no network traffic.
The NFVI consists of three distinct layers: the physical infrastructure, the virtualization layer, and the virtual infrastructure. The VIM manages the NFVI and acts as a conduit for control-path interaction between the VNFs and the NFVI. In general, the VIM provisions, de-provisions, and manages virtual compute, storage, and networking while communicating with the underlying physical resources. The VIM is responsible for operational aspects such as logs, metrics, alerts, analytics, policy enforcement, and service assurance. It is also responsible for interacting with the orchestration layer and the SDN controller. Unlike the NFVI, which consists of several technologies that can be assembled independently, the VIM comes in the form of a complete software stack. OpenStack is the main VIM software stack and is very common in NFV realizations. OpenStack software controls large pools of compute, storage, and networking resources throughout a data center, managed through a dashboard or via the OpenStack application programming interface; it works with popular enterprise and open-source technologies, making it well suited to heterogeneous infrastructure (https://www.openstack.org/).
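As a sketch of how a VIM provisions virtual compute for a VNF, the following code uses the openstacksdk client against a hypothetical OpenStack deployment; the cloud profile, image, flavor, and network names are placeholders that would have to exist in the target environment.

```python
# Sketch of VIM-style provisioning through OpenStack (a common VIM software
# stack): allocate a VM on which a VNF could run. Requires openstacksdk and
# a clouds.yaml profile named "mycloud"; all resource names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")   # credentials from clouds.yaml

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("vnf-net")

server = conn.compute.create_server(
    name="vnf-upf-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until the VM is up
print(server.status)  # ACTIVE once the virtual compute resource is provisioned
```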
[Figure 1.4: NFV architecture, showing the NFV-MANO functional blocks (NFV orchestrator, VNF manager, and VIM) alongside the OSS/BSS, the service/VNF/infrastructure descriptions, the VNFs with their element management, and the NFVI (virtual computing, storage, and networking over the virtualization layer and the computing, storage, and networking hardware), interconnected through the Os-Ma-Nfvo, Or-Vnfm, Ve-Vnfm, Vn-Nf, Nf-Vi, Vi-Vnfm, and Or-Vi reference points [37,40].]

As we mentioned earlier, NFV defines standards for the compute, storage, and networking resources that can be used to build VNFs. The NFVI is a key component of the NFV architecture that describes the hardware and software components on which virtual networks are built. NFV leverages the economies of scale of the IT industry. The NFVI is based on widely available and low-cost, standardized computing components. The NFVI works with different types of servers, for example, virtual or bare metal, software, hypervisors, VMs, and VIMs in order to create a platform for VNFs to run on. The NFVI standards help increase the interoperability of the components of the VNFs and enable multivendor environments [42-44].

The NFV architecture comprises major components including the VNFs, the NFV-MANO, and the NFVI, which work with traditional network components like the OSS/BSS. The NFVI further consists of NFVI points-of-presence (NFVI-PoPs), which are the sites at which the VNFs are deployed by the network operator, including resources for computation, storage, and networking; an NFVI-PoP is a single geographic location where a number of NFVI nodes are situated. NFVI networks interconnect the computing and storage resources contained in an NFVI-PoP, which may include specific switching and routing devices to allow external connectivity. The NFVI works directly with the VNFs and VIMs and in concert with the NFV orchestrator (NFVO). NFV services are instantiated and instructed by the NFVO, which utilizes the VIMs that manage the resources of the underlying infrastructure. The NFVI is critical to realizing the business benefits outlined by the NFV architecture; it delivers the actual physical resources and corresponding software on which VNFs can be deployed. The NFVI creates a virtualization layer on top of the hardware and abstracts the hardware resources so that they can be logically partitioned and allocated to the VNFs in order to perform their functions. The NFVI is also critical to building more complex networks without the geographical limitations of traditional network architectures.
A network service can be viewed architecturally as a forwarding graph of NFs interconnected by the supporting network infrastructure. These NFs can be implemented in a single operator network or interwork between different operator networks. The underlying NF behavior contributes to the behavior of the higher level service; therefore, the network service behavior is a combination of the behavior of its constituent functional blocks, which can include individual NFs, NF sets, NF forwarding graphs, and/or the infrastructure network. The end points and the NFs of the network service are represented as nodes and correspond to devices, applications, and/or physical server applications. An NF forwarding graph can have NF nodes connected by logical links that can be unidirectional, bidirectional, multicast, and/or broadcast.
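A forwarding graph can be captured with a very small data structure. The sketch below represents NF nodes with unidirectional logical links and enumerates the paths from an end point through the chain; the NF names are illustrative, not drawn from a standard.

```python
# Sketch: a network service as a forwarding graph of NFs, represented as a
# small directed graph with unidirectional logical links. Names are illustrative.
from collections import defaultdict

graph = defaultdict(list)

def link(a: str, b: str) -> None:
    graph[a].append(b)          # unidirectional logical link from a to b

# end point -> firewall -> NAT -> router -> data network
link("ue", "fw"); link("fw", "nat"); link("nat", "router"); link("router", "dn")

def walk(node: str, path=()):
    # Depth-first enumeration of every path from `node` to a terminal NF.
    path = path + (node,)
    if not graph[node]:
        print(" -> ".join(path))
        return
    for nxt in graph[node]:
        walk(nxt, path)

walk("ue")   # prints: ue -> fw -> nat -> router -> dn
```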
NFV elements are the discrete hardware and software requirements that are managed in an NFV installation to provide new communication services and application services on commodity-based hardware. NFV services are deployed on commercial off-the-shelf hardware platforms, typically running on x86-based or ARM-based computing platforms and standard switching hardware. The early model of NFV, ETSI MANO, is a common reference architecture. The NFV architecture developed by ETSI MANO includes EMSs, which describe how individual VNFs are managed on a commodity hardware platform.
- NFVI represents the entire set of hardware and software components which create the environment in which VNFs are deployed, managed, and executed. The NFVI can span several locations, that is, places where NFVI-PoPs are operated. The network providing connectivity between these locations is regarded as part of the NFVI.
- Virtualization layer abstracts the hardware resources and decouples the VNF software from the underlying hardware, thus ensuring a hardware-independent life cycle for the VNFs. The virtualization layer is responsible for abstracting and logically partitioning physical resources; enabling the software that implements the VNF to use the underlying virtualized infrastructure; and providing virtualized resources to the VNF. The virtualization layer ensures VNFs are decoupled from hardware resources; thus the software can be deployed on different physical hardware resources. Typically, this type of functionality is provided for computing and storage resources in the form of hypervisors and VMs. A VNF can be deployed in one or several VMs.
- VIM(s) comprises the functionalities that are used to control and manage the interaction of a VNF with computing, storage, and network resources under its authority as well as their virtualization.
- NFVO is in charge of the orchestration and management of NFVI and software resources and realizing network services on NFVI.
- VNF manager(s) is responsible for VNF life cycle management (e.g., instantiation, update, query, scaling, and termination). Multiple VNF managers may be deployed, where a VNF manager may be deployed for each VNF or for multiple VNFs.
- Service, VNF, and infrastructure description is a data set which provides information regarding the VNF deployment template, VNF forwarding graph, service-related information, and NFVI information models. These templates/descriptors are used internally within NFV-MANO. The NFV-MANO functional blocks handle information contained in the templates/descriptors and may expose (subsets of) such information to applicable functional blocks.
- Operations and business support systems (OSS/BSS).
The management and organization working group of ETSI has defined the NFV-MANO architecture (the ETSI NFV specifications are listed at http://www.etsi.org/technologies-clusters/technologies/nfv). According to the ETSI specification, NFV-MANO comprises three major functional blocks: VIM, VNF manager, and NFVO [37,40]. The VIM is a key component of the NFV-MANO architectural framework. It is responsible for controlling and managing the NFVI compute, storage, and network resources, usually within one operator's infrastructure domain (see Fig. 1.4). These functional blocks help standardize the functions of virtual networking to increase interoperability of SDN elements. The VIMs can also handle hardware in a multi-domain environment or may be optimized for a specific NFVI environment. The VIM is responsible for managing the virtualized infrastructure of an NFV-based solution. The VIM operations include the following:
- It maintains an inventory of the allocation of virtual resources to physical resources. This allows the VIM to orchestrate the allocation, upgrade, release, and retrieval of NFVI resources and optimize their use.
- It supports the management of VNF forwarding graphs by organizing virtual links, networks, subnets, and ports. The VIM also manages security group policies to ensure access control.
- It manages a repository of NFVI hardware resources (compute, storage, and networking) and software resources (hypervisors), along with the discovery of the capabilities and features to optimize the use of such resources.
The VIM performs other functions as well, such as collecting performance and failure information via notifications; managing software images (add, delete, update, query, or copy) as requested by other NFV-MANO functional blocks; and managing catalogs of virtualized resources that can be used by the NFVI. In summary, the VIM is a management layer between the hardware and the software in an NFV domain. VIMs are critical to realizing the business benefits that can be provided by the NFV architecture. They coordinate the physical resources that are necessary to deliver network services. This is particularly noticeable for infrastructure-as-a-service providers, where servers, networking, and storage resources must work smoothly with the software components running on top of them. They must ensure that resources can be appropriately allocated to fulfill dynamic service requirements.
The NFVI consists of three distinct layers: the physical infrastructure, the virtualization layer, and the virtual infrastructure, as shown in Fig. 1.4. The NFVI hardware consists of computing, storage, and networking components. OpenStack is often used in conjunction with NFV technology in data centers to deploy cloud services, especially communication services offered by large service providers and cloud providers. OpenStack is an open-source virtualization platform. It enables service providers to deploy VNFs using commercial off-the-shelf hardware. These applications are hosted in a data center so that they can be accessed via the cloud, which is the underlying model to use NFV.
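Since OpenStack frequently plays the VIM role described above, the following hedged sketch shows how a VIM client might provision a VM for a VNFC using the openstacksdk library; the cloud name and the image, flavor, and network identifiers are hypothetical placeholders.

```python
# Hedged sketch: allocating a VM for a VNFC through OpenStack acting as the VIM.
# Requires the openstacksdk package and a configured cloud named "vim-cloud";
# the cloud name and the IMAGE/FLAVOR/NET identifiers below are hypothetical.

import openstack

conn = openstack.connect(cloud="vim-cloud")

# Ask the VIM to provision a virtualization container (here, a VM) for a VNFC.
server = conn.compute.create_server(
    name="vnfc-upf-1",
    image_id="IMAGE_UUID",            # software image managed by the VIM
    flavor_id="FLAVOR_UUID",          # compute/storage sizing
    networks=[{"uuid": "NET_UUID"}],  # virtual network created by the VIM
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once the NFVI resources have been allocated
```

In a real deployment these identifiers would come from the software-image and virtualized-resource catalogs that the VIM maintains, as described above.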
The VIM manages the NFVI and serves as a conduit for control-path interaction between VNFs and the NFVI. The VIM assigns, provisions, de-provisions, and manages virtual computing, storage, and networking resources while communicating with the underlying physical resources. The VIM is responsible for operational aspects, such as logs, metrics, alerts, root cause analysis, policy enforcement, and service assurance. It is also responsible for interacting with the orchestration layer (MANO) and the SDN controller.
NFV is managed by NFV-MANO, which is an ETSI-defined framework for the management and orchestration of all resources in the cloud data center. This includes computing, networking, storage, and VM resources. The main focus of NFV-MANO is to allow flexible on-boarding and to avoid the possible disorder that can arise during transition states of network components. As we mentioned earlier, NFV-MANO consists of three main components:
1. NFVO is responsible for on-boarding of new network services and VNF packages; network service life cycle management; global resource management; and validation and authorization of NFVI resource requests. The NFVO is a key component of the NFV-MANO architectural framework, which helps standardize the functions of virtual networking to increase interoperability of SDN-controlled elements. Resource management is important to ensure there are adequate compute, storage, and networking resources available to provide network services. To meet that objective, the NFVO can work either with the VIM or directly with NFVI resources, depending on the requirements. It has the ability to coordinate, authorize, release, and engage NFVI resources independent of any specific VIM. It can also control VNF instances sharing resources of the NFVI.
2. VNF manager oversees life cycle management of VNF instances; plays a coordination and adaptation role for configuration; and handles event reporting between the NFVI and EMs.
3. VIM controls and manages the NFVI compute, storage, and networking resources.
1.1.2.3 Operational Aspects
A VNF may be composed of one or multiple VNF components (VNFC). A VNFC may be a software entity deployed in the form of a virtualization container.
A VNF realized by a set of one or more VNFCs appears to the outside as a single, integrated system; however, the same VNF may be realized differently by each VNF provider. For example, one VNF developer may implement a VNF as a monolithic, vertically integrated VNFC, while another VNF developer may implement the same VNF using separate VNFCs, for example, one for the control plane, one for the user plane, and one for the EM. VNFCs of a VNF are connected in a graph. For a VNF with only a single VNFC, the internal connectivity graph is a null graph [37].
A VNF can assume a number of internal states to represent the status of the VNF. Transitions between these states provide architectural patterns for some expected VNF functionality. Before a VNF can start its life cycle, it is a prerequisite that the VNF was on-boarded (the process of registering the VNF with the NFVO and uploading the VNF descriptor). Fig. 1.5 provides a graphical overview of the VNF states and state transitions. Each VNFC of a VNF is either parallelizable or nonparallelizable. If it is parallelizable, it may be instantiated multiple times per VNF instance, but there may be a constraint on the minimum and maximum number of parallel instances. If it is nonparallelizable, it is instantiated once per VNF instance. Each VNFC of a VNF may need to handle state information, where it can be either stateful or stateless. A VNFC that does not have to handle state information is a stateless VNFC. A VNFC that needs to handle state information may be implemented either as a stateful VNFC or as a stateless VNFC with an external state, where the state information is held in a data repository external to the VNFC.
Figure 1.5 VNF instance and state transitions [37].
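The state transitions of Fig. 1.5 can be captured as a small transition table; the sketch below is illustrative, with state and event names paraphrasing the figure rather than quoting normative ETSI terminology.

```python
# Illustrative encoding of the VNF life cycle of Fig. 1.5.
# State and event names paraphrase the figure; they are not normative.

ALLOWED = {
    ("Null", "instantiate"): "Instantiated-Not-Configured",
    ("Instantiated-Not-Configured", "configure"): "Instantiated-Configured-Inactive",
    ("Instantiated-Configured-Inactive", "start"): "Instantiated-Configured-Active",
    ("Instantiated-Configured-Active", "stop"): "Instantiated-Configured-Inactive",
    # upgrade/update/rollback/scale (in/out/up/down) keep the VNF configured
    ("Instantiated-Configured-Inactive", "scale"): "Instantiated-Configured-Inactive",
    ("Instantiated-Configured-Active", "scale"): "Instantiated-Configured-Active",
    ("Instantiated-Configured-Inactive", "reset"): "Instantiated-Not-Configured",
    ("Instantiated-Not-Configured", "terminate"): "Terminated",
    ("Instantiated-Configured-Inactive", "terminate"): "Terminated",
}

def transition(state: str, event: str) -> str:
    try:
        return ALLOWED[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' not allowed in state '{state}'")

state = "Null"
for event in ("instantiate", "configure", "start", "stop", "terminate"):
    state = transition(state, event)
print(state)  # Terminated
```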
Depending on the type of VNF, the instantiation can be more or less complex. For a simple VNF consisting of a single VNFC, the instantiation based on the VNF descriptor is straightforward, while for a complex VNF consisting of several VNFCs connected via virtual networks, the VNF manager may require support from an internal function implemented by the VNF to facilitate the process of instantiation. As an example, the VNF manager could boot a predefined number of VNFCs, leaving the booting of the remaining VNFCs and the configuration of the VNF to an internal VNF provider-specific process that may also involve an EM.
The VNF descriptor is a specification template provided by the VNF developer for describing the virtual resource requirements of a VNF. It is used by the NFV-MANO functions to determine how to execute VNF life cycle operations such as instantiation. The NFV-MANO functions consider all VNF descriptor attributes to check the feasibility of instantiating a given VNF. There are several options for how the instances of individual VNFCs can be created: they can be fully or partially loaded virtualization containers, or empty virtualization containers prepared for booting and loading. It is then the responsibility of the NFV-MANO functions to instruct the VIM to create an empty virtualization container with an associated interface that is ready for use.
To instantiate a VNF, the VNF manager creates the VNF's set of VNFC instances as defined in the VNF descriptor by executing one or more VNFC instantiation procedures. The VNF descriptor defines which VNFC instances may be created in parallel or sequentially as well as the order of instantiation. The set of created VNFC instances may already correspond to a complete VNF instance. Alternatively, it may contain only a minimal set of VNFC instances needed to boot the VNF instance. The VNF manager requests a new VM and the corresponding network and storage resources for the VNFC instance according to the definition in the VNF descriptor or uses a VM and the corresponding network and storage resources previously allocated to it. Following successful completion of this process, the VNF manager requests to start the VM [37].
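To illustrate how a VNF manager might expand a descriptor into VNFC instances, the toy example below encodes the attributes discussed above (parallelizable VNFCs, minimum/maximum instance counts, instantiation order) in a plain Python dictionary; the field names are invented for illustration, whereas real descriptors follow ETSI NFV templates.

```python
# A toy VNF descriptor, loosely modeled on the attributes discussed above.
# Field names are illustrative; real descriptors follow ETSI NFV templates.

vnfd = {
    "vnf_id": "example-gateway-vnf",
    "vnfcs": [
        {"name": "control-plane", "parallelizable": False, "stateful": True},
        {"name": "user-plane", "parallelizable": True,
         "min_instances": 1, "max_instances": 8, "stateful": False},
    ],
    # instantiation order: boot the control-plane VNFC first
    "instantiation_order": ["control-plane", "user-plane"],
    "virtual_links": [("control-plane", "user-plane")],
}

def instantiate(descriptor: dict) -> list:
    """Expand the descriptor into the minimal initial set of VNFC instances."""
    instances = []
    for name in descriptor["instantiation_order"]:
        vnfc = next(c for c in descriptor["vnfcs"] if c["name"] == name)
        count = 1 if not vnfc["parallelizable"] else vnfc.get("min_instances", 1)
        instances += [f"{name}-{i}" for i in range(count)]
    return instances

print(instantiate(vnfd))  # ['control-plane-0', 'user-plane-0']
```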
1.1.2.4 Legacy Support and Interworking Aspects
In general, the behavior of a complete system can be characterized when the constituent functional blocks and their interconnections are specified. An inherent property of a functional block (in the traditional sense) is that its operation is autonomous. The behavior of a functional block is characterized by the static transfer function of the functional block, the dynamic state of the functional block, and the inputs/outputs received/generated at the corresponding reference points. If a functional block is disconnected from an immediately preceding functional block, it will continue to function and generate outputs; however, it will process a null or invalid input. As we mentioned earlier, the objective of NFV is to separate the software that defines the NF (the VNF) from the hardware and the generic software that creates the hosting NFVI on which the VNF runs. Therefore, it is a requirement that the VNFs and the NFVI be separately specified. However, this requirement is not immediately satisfied by the traditional method of functional blocks and associated interfaces.
Fig. 1.6 shows an example where a traditional network comprising three functional blocks is evolved into a hypothetical case where two of the three functional blocks have been virtualized. In each case, the functional block is implemented as a VNF that runs on a host function in the NFVI. However, in this process, there are two important differences with the standard functional block representation that must be noted.
Figure 1.6 Traditional and virtualized network functions [40].
The VNF is not a functional block independent of its host function, because the VNF cannot exist autonomously in the way that a functional block can exist.
The VNF depends on the host function for its existence, and if the host function is interrupted or disappears, then the VNF will be interrupted or disappear. Similarly, the container interface reflects this existence dependency between a VNF and its host function. The relationship between the VNF and its host function can be described as follows: the VNF is a configuration of the host function, and the VNF is an abstract view of the host function when the host function is configured by the VNF. Therefore, a host function, when configured with a VNF, has the external appearance of a functional block in the traditional sense, implementing the VNF specification. It is the host function that is the functional block, but it externally appears to be the VNF. Equivalently, the VNF is an abstract view of the host function.
In an operator's network, NFs can be remotely configured and managed. For this purpose, the NFs have an interface, often referred to as a north-bound interface, to the MANO function. The MANO function is often complex and includes a large number of distributed components. However, it can be characterized using the same system model comprising functional blocks and their interfaces (see Fig. 1.7).
Figure 1.7 MANO of traditional and virtualized network functions [40].
The objective of NFV is to separate the VNFs from the infrastructure, including their management. As shown in Fig. 1.7, the MANO functions are divided between the MANO of the NFVI and the MANO of the VNFs. The MANO of the NFVI is an integral part of the NFV framework. One possible scenario is the management of existing NFs that are partially virtualized by an NFV deployment. Managing the VNFs using the existing systems can be used for the deployment of NFV in the transition period, as illustrated in Fig. 1.7. The removal of the hardware from the VNFs eliminates the requirement of managing hardware aspects. The flexibility provided by NFV can only be fully achieved if the MANO implements an efficient VNF life cycle management process adapted to new requirements such as fast order delivery, fast recovery, and auto-scaling.
1.1.3 Separation of Control and User Planes (Software-Defined Networks)
SDN is an emerging architecture that is dynamic, manageable, cost-effective, and adaptable, making it ideal for the high-bandwidth, dynamic nature of today's applications. This architecture decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services. The OpenFlow protocol is a fundamental element for building SDN solutions. The OpenFlow standard, created in 2008, was recognized as the first SDN architecture that defined how the control- and data-plane elements would be separated and communicate with each other using the OpenFlow protocol. The Open Networking Foundation (ONF, https://www.opennetworking.org) is the body in charge of managing the OpenFlow standards, which are open-source specifications. However, there are other standards and open-source organizations with SDN resources, thus OpenFlow is not the only protocol that makes up the SDN framework. SDN is a complementary approach to NFV for network management. While they both manage networks, they rely on different methods. SDN offers a centralized view of the network, giving an SDN controller the ability to act as the intelligence of the network.
As shown in Fig. 1.8, the SDN controller communicates with switches and routers via south-bound APIs and with the applications via north-bound APIs. In the SDN architecture, the splitting of the control and data forwarding functions is referred to as disaggregation because these components can be sourced separately, rather than deployed as a single integrated system. This architecture provides the applications with more information about the state of the entire network from the controller's perspective compared to traditional networks, where the network is application aware. The SDN architectures generally consist of three functional groups, as follows:
- SDN applications: The application plane consists of applications such as routing and load balancing, which communicate with the SDN controller in the control plane through north-bound interfaces (e.g., REST and JSON). A REST application programming interface, also referred to as a RESTful web service, is based on the Representational State Transfer (REST) scheme, an architectural style and approach to communications often used in web services development; REST-compliant web services allow requesting systems to access and manipulate textual representations of web resources using a uniform and predefined set of stateless operations, and a RESTful API uses HTTP requests to GET, PUT, POST, and DELETE data. JSON, or JavaScript Object Notation, is a minimal, readable format for structuring data, used primarily to transmit data between a server and a web application as an alternative to XML. SDN applications are programs that communicate behaviors and needed resources with the SDN controller via APIs. In addition, the applications can build an abstracted view of the network by collecting information from the controller for decision-making purposes. These applications include network management, analytics, or business applications that are used to run large data centers. For example, an analytics application might be built to recognize suspicious network activity for security purposes.
Figure 1.8 Illustration of the SDN concept [49].
- SDN controller: The SDN controller is a logical entity that receives instructions or requirements from the SDN application layer and relays them to the networking components. The controller also extracts information about the network from the hardware devices and communicates back to the SDN applications with an abstract view of the network, including statistics and events. The control plane consists of one or a set of SDN controllers, for example, the Open Network Operating System (ONOS, an open-source software-defined network operating system, https://wiki.onosproject.org/) and OpenDaylight (https://www.opendaylight.org/), an open-source, community-led, and industry-supported software-defined networking project hosted by the Linux Foundation, created to advance SDN adoption and to provide a strong basis for network function virtualization by offering a directly deployable SDN platform to which contributors and vendors can deliver add-ons. The controllers logically maintain a global and dynamic network view and provide control tasks to manage the network devices in the user plane via south-bound interfaces (e.g., OpenFlow or ForCES) based on requests from the applications. (Forwarding and control element separation, ForCES, defines an architectural framework and associated protocols to standardize information exchange between the control plane and the user/forwarding plane in a ForCES network element; IETF RFC 3654 and RFC 3746 define the ForCES requirements and framework, respectively, see https://tools.ietf.org/html/rfc5810.)
The controllers communicate with each other using east-west-bound interfaces.
- SDN networking devices: The SDN networking devices on the infrastructure layer control the forwarding and data processing capabilities of the network. This includes forwarding and processing of the data path. The user plane is composed of data forwarding elements, such as virtual/physical switches and routers, which forward and route the data packets based on the rules prescribed by the SDN controllers. This plane is responsible for all activities related to provisioning and monitoring of the networks.
The SDN architecture APIs are often referred to as north-bound and south-bound interfaces, defining the communication between the applications, controllers, and networking systems. A north-bound interface is defined as the connection between the controller and applications, whereas the south-bound interface is the connection between the controller and the physical networking hardware. Since SDN is a virtualized architecture, these elements do not have to be physically co-located. An SDN controller platform typically contains a collection of pluggable modules that can perform different network functions, such as tracking device inventory within the network along with maintaining information about device capabilities, network statistics, analytics, etc. Extensions can be inserted in the SDN controller to enhance its functionality in order to support more advanced capabilities, such as running algorithms to perform analytics and orchestrating new rules throughout the network.
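As an illustration of the north-bound interaction described above, the hedged sketch below shows an SDN application reading the controller's abstracted network view and pushing a policy over a RESTful interface; the controller URL, endpoint paths, credentials, and JSON fields are hypothetical placeholders rather than any specific controller's API.

```python
# Hedged sketch of a north-bound API interaction: an SDN application asks the
# controller for its network view and pushes a policy. The base URL, paths,
# credentials, and JSON schema are hypothetical, not a real controller's API.

import requests

CONTROLLER = "http://sdn-controller.example:8181"
AUTH = ("admin", "admin")  # placeholder credentials

# Read an abstracted view of the network (topology, statistics, events).
topology = requests.get(f"{CONTROLLER}/api/topology", auth=AUTH, timeout=5).json()
print(len(topology.get("nodes", [])), "nodes visible to the application")

# Communicate a desired behavior; the controller translates it into
# south-bound rule updates toward the switches and routers.
policy = {"match": {"dst_port": 80}, "action": "rate-limit", "rate_mbps": 100}
resp = requests.post(f"{CONTROLLER}/api/policies", json=policy, auth=AUTH, timeout=5)
resp.raise_for_status()
```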
The centralized, programmable SDN environments can easily adjust to the rapidly evolving needs of enterprise networks. SDN can lower the cost, limit uneconomical provisioning, and further provide flexibility in the networks. As already mentioned, SDN and NFV are set to play key roles for operators as they prepare to migrate from 4G to 5G and to gradually scale their networks. NFVO and NFVI, along with SDN, are critical to the success of 5G rollouts, enabling agile infrastructure that can adapt to the network slicing, low-latency, and high-capacity requirements of major 5G use cases.
One of the key architectural enhancements in 3GPP's 5G network is control- and user-plane separation (CUPS) of EPC nodes, which enables flexible network deployment and operation, by distributed or centralized deployment, and independent scaling between control-plane and user-plane functions. In other words, the network equipment is now changing from closed and vendor-specific to open and generic with the SDN architectural model, which enables the separation of control and data planes and allows networks to be programmed through open interfaces. With NFV, network functions that were previously realized on costly customized-hardware platforms are now implemented as software appliances running on low-cost commodity hardware or in cloud computing environments. By splitting the network entities in this manner (i.e., from serving gateway (SGW) to SGW-C and SGW-U and from packet gateway (PGW) to PGW-C and PGW-U), it is possible to scale these components independently and to enable a range of deployment options. The protocol used between the control and user planes can be either an extension of the existing OpenFlow protocol or new interfaces which have been specified as part of the 3GPP CUPS work item [1].
1.1.3.1 Architectural Aspects
Next-generation networks are experiencing an increase in the use of very dense deployments where user terminals will be able to simultaneously connect to multiple transmission points.
It is a significant advantage for the next-generation access networks to design the architecture on the premise of separation of the control-plane and user-plane functions. This separation would imply allocation of specific control-plane and user-plane functions between different nodes. As we stated earlier, the goal of SDN is to enable cloud and network engineers and administrators to respond quickly to changing business requirements via a centralized control platform. SDN encompasses multiple types of network technologies designed to make the networks more flexible and agile to support the virtualized server and storage infrastructure of modern data centers. SDN was originally defined as an approach to designing, building, and managing networks that separates the network's control and forwarding planes, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services. All SDN models have some version of an SDN controller, as well as south-bound APIs and north-bound APIs. As shown in Fig. 1.9, an SDN controller is interfaced with the application layer and the infrastructure layer via north-bound and south-bound APIs, respectively. As the intelligence of the network, SDN controllers provide a centralized view of the overall network and enable network administrators to instruct the underlying systems (e.g., switches and routers) on how the forwarding plane should route/handle network traffic. SDN uses south-bound APIs to relay information to the switches and routers. OpenFlow, considered the first standard in SDN, was the original south-bound API and remains one of the most commonly used protocols. Despite some belief that OpenFlow and SDN are the same, OpenFlow is merely one piece of the larger SDN framework. SDN uses north-bound APIs to communicate with the applications in the application layer.
These help network administrators to programmatically shape the traffic and deploy services.
Figure 1.9 SDN framework and its main components [71].
In the SDN architecture for 5G networks, shown in Fig. 1.9, there are notably three layers: an infrastructure layer (user plane), a control layer (control plane), and an application layer (application plane). The infrastructure layer mainly consists of forwarding elements (e.g., physical and virtual switches, routers, and wireless access points) that comprise the data plane. These devices are mainly responsible for collecting network status, storing it temporarily in local network devices, and sending the stored data to the network controllers, as well as for managing packets based on the rules set by the network controllers or administrators. They allow the SDN architecture to perform packet switching and forwarding via an open interface. The control layer/control plane maintains the link between the application layer and the infrastructure layer through open interfaces. Three communication interfaces allow the controller to interact with the other layers: the south-bound interface for interacting with the infrastructure layer, the north-bound interface for interacting with the application layer, and the east-west-bound interfaces for communicating with groups of controllers. Their functions may include reporting network status, importing packet-forwarding rules, and providing various service access points (SAPs) in various forms. The application layer is designed mainly to fulfill user requirements. It consists of the end-user business applications that utilize network services and resources. The SDN applications are able to control and access switching devices at the data layer through the control-plane interfaces. The SDN applications include network visualization, dynamic access control, security, mobility, cloud computing, and load balancing.
The functionalities of an SDN controller can be classified into four categories: (1) a high-level language for SDN applications to define their network operation policies; (2) a rule update process to install rules generated from those policies; (3) a network status collection process to gather network infrastructure information; and (4) a network status synchronization process to build a global network view using the network status collected by each individual controller. One of the basic functions of the SDN controller is to translate application specifications into packet-forwarding rules. This function uses a protocol to address communication between its application layer and control layer. Therefore, it is imperative to utilize high-level languages (e.g., C/C++, Java, and Python) for the development of applications between the interface and the controllers. An SDN controller is accountable for generating packet-forwarding rules as well as clearly describing the policies and installing the rules in the relevant devices. The forwarding rules can be updated with policy changes. Furthermore, the controller should maintain consistency for packet forwarding by using either the original rule set or the updated rule set, or by using the updated rules only after the update process is completed. The SDN controllers collect network status to provide a global view of the entire network to the application layer. The network status includes time duration, packet number, data size, and flow bandwidth. Unauthorized control of the centralized controller can degrade controller performance. Generally, this can be overcome by maintaining a consistent global view across all controllers. Moreover, SDN applications play a significant role in ensuring application simplicity and guaranteeing network consistency.
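The four controller functions above can be illustrated with a toy controller that compiles a high-level policy into per-switch rules and synchronizes collected status into a global view; all of the structures below are invented for illustration.

```python
# Toy illustration of functions (1)-(4) above: a controller translates a
# high-level policy into per-switch forwarding rules and keeps a global view
# built from collected status reports. All structures are illustrative.

from collections import defaultdict

class ToyController:
    def __init__(self):
        self.rules = defaultdict(list)   # switch_id -> installed rules
        self.status = {}                 # switch_id -> reported network status

    def apply_policy(self, policy: dict, path: list):
        """(1)+(2): compile a policy into rules and install them along a path."""
        for switch in path:
            self.rules[switch].append(
                {"match": policy["match"], "action": policy["action"]})

    def collect_status(self, switch: str, report: dict):
        """(3): gather per-switch status (duration, packets, bytes, bandwidth)."""
        self.status[switch] = report

    def global_view(self) -> dict:
        """(4): synchronize collected status into one network-wide view."""
        return {"switches": sorted(self.status), "reports": dict(self.status)}

c = ToyController()
c.apply_policy({"match": {"ip_dst": "10.0.0.5"}, "action": "forward:port2"},
               path=["s1", "s2"])
c.collect_status("s1", {"packets": 1200, "bytes": 960000})
print(c.rules["s1"], c.global_view()["switches"])
```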
In most SDNs, OpenFlow is used as the south-bound interface. OpenFlow is a flow-oriented protocol and includes switch and port abstractions for flow control. The OpenFlow protocol is currently maintained by the ONF and serves as a fundamental element for developing SDN solutions. OpenFlow, the first standard interface linking the forwarding and control layers of the SDN architecture, allows management and control of the forwarding plane of network devices (e.g., switches and routers), both physical and virtual. OpenFlow helps the SDN architecture to adapt to the high bandwidth and dynamic nature of user applications, adjust the network to different business needs, and reduce management and maintenance complexity. It must be noted that OpenFlow is not the only protocol available or in development for SDN. To work in an OpenFlow environment, any device that wants to communicate with an SDN controller must support the OpenFlow protocol. The SDN controller sends changes to the switch/router flow table through the south-bound interface, allowing network administrators to partition traffic, control flows for optimal performance, and start testing new configurations and applications (see Fig. 1.10).
Figure 1.10 Example of OpenFlow flow table entries [48,49].
The OpenFlow features support a number of commonly used data-plane protocols, ranging from layer-2 to layer-4, with packet classification being performed using stateless match tables, and packet processing operations, known as actions or instructions, ranging from header modification, metering, QoS, and packet replication (e.g., to implement multicast or link aggregation) to packet encapsulation/de-encapsulation. Various statistics are defined per port, per table, and per table entry. Information can be retrieved on demand or via notifications. OpenFlow is, however, not merely an interface. It also defines the expected behavior of the switch and how the behavior can be customized using the interface. An OpenFlow controller is an SDN controller that uses the OpenFlow protocol to connect and configure the network devices in order to find the best path for application traffic.
OpenFlow controllers create a central control point to manage OpenFlow-enabled network components.
Figure 1.11 Main components of an OpenFlow switch [49].
An OpenFlow logical switch (see Fig. 1.11) consists of one or more flow tables and a group table, which perform packet lookups and forwarding, and one or more OpenFlow channels to an external controller. The switch communicates with the controller, and the controller manages the switch via the OpenFlow switch protocol. Using the OpenFlow switch protocol, the controller can add, update, and delete flow entries in flow tables, both reactively (in response to packets) and proactively. As shown in Fig. 1.10, each flow table in the switch contains a set of flow entries; each flow entry consists of match fields, counters, and a set of instructions to apply to matching packets. Matching starts at the first flow table and may continue to additional flow tables of the pipeline. Flow entries match packets in priority order, with the first matching entry in each table being used. If a matching entry is found, the instructions associated with the specific flow entry are executed. If no match is found in a flow table, the outcome depends on the configuration of the table-miss flow entry; for example, the packet may be forwarded to the controller over the OpenFlow channel, dropped, or may continue to the next flow table. Instructions associated with each flow entry either contain actions or modify pipeline processing. Actions included in instructions describe packet forwarding, packet modification, and group table processing. Pipeline processing instructions allow packets to be sent to subsequent tables for further processing and allow information, in the form of meta-data, to be communicated between tables. Table pipeline processing stops when the instruction set associated with a matching flow entry does not specify a next table; at this point the packet is usually modified and forwarded [49].
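A simplified model of this matching pipeline is sketched below: prioritized flow entries, a table-miss entry, and a goto-table instruction. The field names are illustrative and do not follow the OpenFlow wire format.

```python
# Simplified model of the OpenFlow matching pipeline described above.
# Field names are illustrative, not the OpenFlow wire format.

def lookup(table, packet):
    """Return the highest-priority entry whose match fields all equal the packet's."""
    candidates = [e for e in table["entries"]
                  if all(packet.get(k) == v for k, v in e["match"].items())]
    return max(candidates, key=lambda e: e["priority"], default=table["miss"])

tables = [
    {"entries": [
        {"priority": 10, "match": {"eth_type": 0x0800, "ip_dst": "10.0.0.5"},
         "instructions": {"goto": 1}},          # send packet to the next table
     ],
     "miss": {"instructions": {"action": "send-to-controller"}}},
    {"entries": [
        {"priority": 5, "match": {"tcp_dst": 80},
         "instructions": {"action": "output:2"}},
     ],
     "miss": {"instructions": {"action": "drop"}}},
]

packet = {"eth_type": 0x0800, "ip_dst": "10.0.0.5", "tcp_dst": 80}
table_id = 0
while table_id is not None:
    instructions = lookup(tables[table_id], packet)["instructions"]
    table_id = instructions.get("goto")   # pipeline continues if a next table is given
    if table_id is None:
        print("final action:", instructions.get("action"))  # final action: output:2
```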
We now change our focus to 3GPP CUPS and discuss the efforts within 3GPP to enable SDN control of networks. 3GPP completed the Rel-14 specification of the control- and user-plane separation work item in June 2017, which is a key core network feature for many operators. Control- and user-plane separation of EPC nodes provides architecture enhancements for the separation of functionality in the EPC's SGW, PGW, and traffic detection function (TDF). This enables flexible network deployment and operation, distributed or centralized architecture, as well as independent scaling between control-plane and user-plane functions, without affecting the functionality of the existing nodes as a result of the split. The user data traffic in operators' networks has been doubling on an annual basis in recent years due to the increasing use of smart devices and the proliferation of video streaming and other broadband applications. At the same time, there is strong consumer demand for improved user experience, higher throughput, and lower latency. The CUPS scheme allows for reducing the latency of applications/services by selecting user-plane nodes which are closer to the RAN or more appropriate for the intended usage type, without increasing the number of control-plane nodes. It further supports the increase of data traffic by enabling the addition of user-plane nodes without changing the number of control-plane nodes (SGW-C, PGW-C, and TDF-C) in the network. The CUPS scheme further allows locating and scaling the control-plane and user-plane resources of the EPC nodes independently, as well as enabling independent evolution of the control-plane and user-plane functions.
The CUPS paradigm is a precursor to the use of the SDN concept in 3GPP networks. The traffic detection function has become an important element in mobile networks due to the increasing complexities in managing data services, demand for personalization, and service differentiation. It provides communication service providers the opportunity to capitalize on analytics for traffic optimization, charging, and content manipulation, working in conjunction with the policy management and charging system. It enforces traffic policies on data flows in real time, based on predetermined rules or rules dynamically determined by the policy and charging rules function. The traffic detection function was introduced together with the Sd reference point as a means for traffic management in the 3GPP Rel-11 specifications using layer-7 traffic identification.
Figure 1.12 Separation of control plane and user plane in EPC [1].
The following high-level principles were incorporated in the CUPS framework [8,36]:
- As shown in Fig. 1.12, the control-plane functions terminate control-plane protocols such as GTP-C and Diameter (Gx, Gy, Gz); a control-plane function can interface with multiple user-plane functions, and a user-plane function can be shared by multiple control-plane functions. (Diameter is an application-layer protocol for authentication, authorization, and accounting. It is a message-based protocol, where authentication, authorization, and accounting nodes receive positive or negative acknowledgment for each message exchanged. For message exchange, Diameter uses the transmission control protocol and the stream control transmission protocol, which makes it more reliable. The Diameter base protocol is specified in IETF RFC 6733, https://tools.ietf.org/html/rfc6733.)
- A UE is served by a single SGW-C, but multiple SGW-Us can be selected for different packet data network (PDN) connections. A user-plane data packet may traverse multiple user-plane functions.
- The control-plane functions control the processing of the packets in the user plane by provisioning a set of rules in Sx sessions, that is, packet detection, forwarding, QoS enforcement, and usage reporting rules.
- While all 3GPP features impacting the user-plane functions (e.g., policy and charging control, lawful interception, etc.) are supported, the user-plane functions are designed to be 3GPP agnostic as much as possible.
- A legacy SGW, PGW, and TDF can be replaced by a split node without affecting connected legacy nodes.
As shown in Fig. 1.12, CUPS introduces three new interfaces, namely Sxa, Sxb, and Sxc, between the control-plane and user-plane functions of the SGW, PGW, and TDF, respectively. 3GPP evaluated candidate protocols such as OpenFlow, ForCES, and Diameter. The criteria identified for the selection process included ease of implementation on simple forwarding devices, no transport blocking, low latency, capabilities to support the existing 3GPP features, ease of extension and maintenance of the protocols to support 3GPP features, and backward compatibility across releases. Based on these criteria, it was decided to define a new 3GPP native protocol with type-length-value-encoded messages over user datagram protocol (UDP)/IP, called the packet-forwarding control protocol (PFCP), for the Sxa, Sxb, and Sxc interfaces [8]. The protocol stack for the control plane/user plane over the Sxa, Sxb, Sxc, and combined Sxa/Sxb reference points is depicted in Fig. 1.13. The PFCP is a new protocol layer with the following properties:
- One Sx association is established between a control-plane function and a user-plane function before Sx sessions can be established on the user-plane function. The Sx association may be established by the control-plane function or by the user-plane function.
- An Sx session is established in the user-plane function to provision rules instructing the user-plane function on how to process certain traffic. An Sx session may correspond to an individual PDN connection or TDF session, or it can be a standalone session not tied to any PDN connection/TDF session, for example, for forwarding DHCP/RADIUS/Diameter signaling between the PGW-C and the PDN over SGi.
Figure 1.13 Control-plane/user-plane protocol stack over Sxa, Sxb, Sxc, and combined Sxa/Sxb [8].
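The type-length-value encoding underlying PFCP can be illustrated as follows; the information element (IE) type values below are placeholders (real assignments are made in 3GPP TS 29.244), and only the bare IE framing is shown, without the PFCP message header.

```python
# Minimal sketch of PFCP-style type-length-value (TLV) encoding over UDP.
# IE type numbers below are placeholders; real values are assigned in
# 3GPP TS 29.244. PFCP runs over UDP (port 8805).

import struct

def encode_ie(ie_type: int, value: bytes) -> bytes:
    # Each IE carries a 2-byte type and a 2-byte length, followed by the value.
    return struct.pack("!HH", ie_type, len(value)) + value

def decode_ies(payload: bytes):
    ies, offset = [], 0
    while offset < len(payload):
        ie_type, length = struct.unpack_from("!HH", payload, offset)
        ies.append((ie_type, payload[offset + 4: offset + 4 + length]))
        offset += 4 + length
    return ies

# Hypothetical IEs for an Sx session establishment request.
msg = encode_ie(60, b"cp-function.example") + encode_ie(57, struct.pack("!Q", 1))
print(decode_ies(msg))
```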
Sx node-related procedures include Sx association setup/update/release procedures; monitoring of the peer PFCP entity; load and overload control procedures to balance loading across user-plane functions and to reduce signaling toward a user-plane function under overload conditions; and the Sx packet flow description (PFD) management procedure to provision PFDs for one or more application identifiers in the user-plane function. Sx session-related procedures include Sx session establishment/modification/deletion procedures and the Sx session report procedure to report traffic usage or specific events. Data forwarding between the control-plane function and the user-plane function is supported by GTP-U encapsulation on the user plane and by PFCP on the control plane, where the latter protocol supports reliable delivery of messages. A set of new domain name system (DNS) procedures is defined for user-plane function selection. The control-plane function selects a user-plane function based on DNS or local configuration, the capabilities of the user-plane function, and the overload control information provided by the user-plane function.
The domain name system is a hierarchical and decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates readily memorized domain names to the numerical Internet protocol addresses needed for locating and identifying computer hosts and devices with the underlying network protocols. The domain name system delegates the responsibility of assigning domain names and mapping those names to Internet resources by designating authoritative name servers for each domain. Network administrators may delegate authority over subdomains of their allocated name space to other name servers. This mechanism provides distributed and fault-tolerant service and was designed to avoid a single large central database. The domain name system also specifies the technical functionality of the database service at its core: it defines the domain name system protocol, a detailed specification of the data structures and data communication exchanges used in the domain name system, as part of the Internet protocol suite.
Figs. 1.14 and 1.15 show two example deployment scenarios based on a higher layer functional split between the central unit and the distributed unit (DU) of the base station, reusing Rel-12 dual-connectivity concepts. The control-plane/user-plane separation permits flexibility for different operational scenarios, such as moving the PDCP to a central unit while retaining the radio resource management (RRM) in a master cell; moving the RRM to a more central location where it has oversight over multiple cells, allowing independent scalability of the user plane and control plane; and centralizing RRM with local breakout of data connections of some UEs closer to the base station site.
During the establishment of an Sx session, the control-plane function and the user-plane function select and communicate to each other the IP destination address at which they expect to receive subsequent request messages related to that Sx session. The control-plane function and the user-plane function may change this IP address subsequently during an Sx session modification procedure. Typically, Ethernet is used as the layer-2 protocol, but network operators may use any other technologies.
Figure 1.14 Centralized PDCP-U with local RRM [34].
Figure 1.15 Centralized PDCP with centralized RRM in separate platforms [34].
1.1.4 Network Slicing
The combination of SDN and NFV enables dynamic, flexible deployment and on-demand scaling of NFs, which are necessary for the development of the 5G packet core network.
Such characteristics have also encouraged the development of network slicing and service function chaining. From a UE perspective, slicing a network means grouping devices with similar performance requirements (transmission rate, delay, throughput, etc.) into a slice. From a network perspective, slicing a network means dividing an underlying physical network infrastructure into a set of logically isolated virtual networks. This concept is considered an important feature of a 5G network and has been standardized by 3GPP. Service function chaining (SFC), or network service chaining, allows traffic flows to be routed through an ordered list of NFs (e.g., firewalls, load balancers, etc.). Network service chaining is a capability that uses software-defined networking to create a chain of connected network services and connect them in a virtual chain. This capability can be used by network operators to set up suites or catalogs of connected services that enable the use of a single network connection for many services with different characteristics. The primary advantage of network service chaining is the way virtual network connections can be set up to handle traffic flows for connected services; for example, a software-defined networking controller may use a chain of services and apply them to different traffic flows depending on the source, destination, or type of traffic. The best practical use case of SFC is to chain NFs (i.e., middle boxes in this case) placed at the interface between the PGW and the external networks.
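A toy illustration of such a chain is given below, where packets traverse an ordered list of NF callables; the firewall and load-balancer behaviors are invented stand-ins for real middle boxes.

```python
# Toy service function chain: traffic flows traverse an ordered list of NFs.
# The NF behaviors are illustrative stand-ins for real middle boxes.

def firewall(pkt):
    return None if pkt.get("dst_port") == 23 else pkt  # drop telnet, else pass

def load_balancer(pkt):
    pkt["next_hop"] = f"server-{hash(pkt['src_ip']) % 2}"  # pick a backend
    return pkt

CHAIN = [firewall, load_balancer]  # ordered list of NFs for this traffic class

def apply_chain(pkt, chain=CHAIN):
    for nf in chain:
        pkt = nf(pkt)
        if pkt is None:          # an NF in the chain dropped the packet
            return None
    return pkt

print(apply_chain({"src_ip": "192.0.2.1", "dst_port": 80}))
print(apply_chain({"src_ip": "192.0.2.1", "dst_port": 23}))  # None (dropped)
```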
As depicted in Fig. 1.16, the network slicing architecture comprises three layers: (1) the service instance layer, (2) the network slice instance (NSI) layer, and (3) the resource layer. The service instance layer represents the services (end-user or business services) which are supported by the network, where each service is represented by a service instance. The services are typically provided by the network operator or a third party; that is, a service instance can represent either an operator service or a third-party service. A network operator uses a network slice blueprint to create an NSI. A network slice blueprint is a complete description of the structure, configuration, and plans/workflows for how to instantiate and control an NSI during its life cycle; it enables the instantiation of a network slice with certain network characteristics (e.g., ultralow latency, ultrareliability, and value-added services for enterprises), and it refers to required physical and logical resources and/or to sub-network blueprint(s). An NSI provides the network characteristics which are required by a service instance. An NSI may also be shared across multiple service instances provided by the network operator. The NSI may be composed of zero or more sub-network instances, which may be shared by another NSI. Similarly, a sub-network blueprint, which describes the structure (and contained components) and configuration of sub-network instances and the plans/workflows for how to instantiate them, and which refers to physical and logical resources and may refer to other sub-network blueprints, is used to create a sub-network instance, that is, a set of NFs which run on the physical or logical resources.
Figure 1.16 Network slicing conceptual architecture [6].
A network slice is a complete logical network providing telecommunications services and network capabilities. Network slices vary depending on the features of the service they need to support.
Fig. 1.17 shows an example of the logical architecture of network slicing in a radio access network. In this example, different NSIs can either share the same functions or have dedicated functions. An NSI is the realization of the network slicing concept: an end-to-end logical network comprising a group of network functions, resources, and connections, typically covering multiple technical domains, including the terminal, access network, transport network, and core network, as well as the dual-connectivity domain that hosts third-party applications from vertical industries. Different NSIs may have different network functions and resources; they may also share some of the network functions and resources. The new RAT features a flexible air interface design and unified medium access control (MAC) scheduling to support different network slice types. Such a combination allows time-domain and frequency-domain resource isolation without compromising resource efficiency. The protocol stack can be tailored to meet the diverse service requirements of different NSIs. For instance, layer-3 RRC functions can be customized in the network slice design phase. Layer-2 can have various configurations for different NSIs to meet specific requirements for radio bearers. In addition, layer-1 uses flexible numerology to support different
network slice types. An NSI may contain different types of access nodes, such as the 3GPP new RAT and non-3GPP RATs. Consolidating fixed and wireless access in 5G is a desirable approach, which also requires further enhancements in the architecture design [74].
Figure 1.17 Example of RAN architecture with network slicing support [74].
According to 3GPP specifications [3,16], a network slice always consists of an access and a core network part. The support of network slicing relies on the principle that traffic for different slices is handled by different PDU sessions. The network can create different network slices by scheduling and also by providing different L1/L2 configurations. The UE should be able to provide assistance information for network slice selection in an RRC message. While the network can potentially support a large number of slices, the UE does not need to support more than eight slices in parallel. As we mentioned earlier, network slicing is a concept that allows differentiated network services depending on each customer's requirements. The mobile network operators can classify customers into different tenant types, each having different service requirements that control what slice types each tenant is authorized to use, based on service-level agreements and subscriptions.
Network slices may differ depending on the supported features and optimization of the network functions. An operator may opt to deploy multiple NSIs delivering exactly the same features but for different groups of UEs. A single UE can simultaneously be served by one or more NSIs via the NG-RAN. A single UE may be served by a maximum of eight network slices at any time. The access and mobility management function (AMF) instance serving the UE logically belongs to each of the NSIs serving the UE, that is, this AMF instance is common
to the NSIs serving a UE. The selection of the set of NSIs for a UE is triggered by the first associated AMF during the registration procedure, typically by interacting with the network slice selection function (NSSF), which may lead to a change of AMF.
A network slice is identified by an identifier known as single-network slice selection assistance information (S-NSSAI). The S-NSSAI identity consists of a slice/service type (SST), which refers to the expected network slice behavior in terms of features and services, and a slice differentiator (SD), which is an optional information element that complements the SST(s) to differentiate among multiple network slices of the same SST. The support of all standardized SST values by a public land mobile network (PLMN) is not required. The S-NSSAI can have standard values or PLMN-specific values. The S-NSSAI identifiers with PLMN-specific values are associated with the PLMN ID of the PLMN that assigns them. An S-NSSAI cannot be used by the UE in AS procedures in any PLMN other than the one to which the S-NSSAI is associated. Standardized SST values provide a way of establishing global interoperability for slicing so that PLMNs can support the roaming use case more efficiently for the most commonly used SSTs. Currently, the SST values of 1, 2, and 3 are associated with the eMBB, URLLC, and mMTC slice types, respectively [16]. The NSSAI is a collection of S-NSSAIs. There can be at most eight S-NSSAIs in the NSSAI sent in signaling messages between the UE and the network. Each S-NSSAI assists the network in selecting a particular NSI. The same NSI may be selected via different S-NSSAIs. An NSSAI includes one or more S-NSSAIs, and each network slice is uniquely identified by an S-NSSAI [3] (see Fig. 1.18).
Figure 1.18 Slice selection and identifiers [55].
A public land mobile network is a mobile wireless network that is centrally operated and administrated by an organization and uses land-based RF transceivers or base stations as network hubs. The term is generally used to refer to an operator network.
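The S-NSSAI structure described above can be illustrated with a short sketch; the SST-to-name mapping follows the standardized values quoted in the text, while the SD value and the class layout are illustrative.

```python
# Illustrative encoding of S-NSSAI and NSSAI as described above: an S-NSSAI is
# an SST (standardized values 1/2/3 for eMBB/URLLC/mMTC) plus an optional
# slice differentiator (SD); an NSSAI carries at most eight S-NSSAIs.

from dataclasses import dataclass
from typing import List, Optional

SST_STANDARD = {1: "eMBB", 2: "URLLC", 3: "mMTC"}

@dataclass(frozen=True)
class SNSSAI:
    sst: int                  # slice/service type
    sd: Optional[int] = None  # optional slice differentiator

    def label(self) -> str:
        name = SST_STANDARD.get(self.sst, "PLMN-specific")
        sd_part = f", SD={self.sd:#08x}" if self.sd is not None else ""
        return f"SST={self.sst} ({name}){sd_part}"

def make_nssai(s_nssais: List[SNSSAI]) -> List[SNSSAI]:
    if len(s_nssais) > 8:
        raise ValueError("at most eight S-NSSAIs may be sent in an NSSAI")
    return s_nssais

nssai = make_nssai([SNSSAI(1), SNSSAI(2, sd=0x00AB12)])
for s in nssai:
    print(s.label())
```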
NG-RAN supports differentiated handling of traffic for different network slices which have been preconfigured. The support of slice capabilities in terms of the NG-RAN functions (i.e., the set of NFs that comprise each slice) is implementation dependent. The NG-RAN supports the selection of the RAN part of the network slice by means of assistance information provided by the UE or the 5GC, which unambiguously identifies one or more preconfigured network slices in the PLMN. The NG-RAN supports policy enforcement between slices according to service-level agreements. It is possible for a single NG-RAN node to support multiple slices. The NG-RAN can apply the best RRM policy depending on the service agreements for each supported slice. The NG-RAN further supports QoS differentiation within a slice.

For initial attach, the UE may provide assistance information to support the selection of an AMF, and the NG-RAN uses this information for routing the initial NAS message to an AMF. If the NG-RAN is unable to select an AMF using this information, or the UE does not provide such information, the NG-RAN sends the NAS signaling to a default AMF. For subsequent accesses, the UE provides a Temp ID, which is assigned to the UE by the 5GC, to enable the NG-RAN to route the NAS message to the appropriate AMF as long as the Temp ID is valid. Note that the NG-RAN is aware of, and can reach, the AMF associated with the Temp ID.

The NG-RAN supports resource isolation between slices via RRM policies and protection mechanisms that avoid conditions such as a shortage of shared resources in one slice resulting in under-service issues in another slice. It is possible to fully dedicate NG-RAN resources to a certain slice. Some slices may be available only in certain parts of the network. Awareness in the NG-RAN of the slices supported in the cells of its neighbors may be beneficial for inter-frequency mobility in the connected mode. It is assumed that the slice configuration does not change within the UE's registration area. The NG-RAN and the 5GC can manage service requests for a slice that may or may not be available in a given area. Admission or rejection of access to a slice may depend on factors such as support for the slice, availability of resources, and support of the requested service by other slices. In the case where a UE is simultaneously associated with multiple slices, only one signaling connection is maintained. For intra-frequency cell reselection, the UE always tries to camp on the best cell, whereas for inter-frequency cell reselection, dedicated priorities can be used to control the frequency on which the UE camps. Slice awareness in NG-RAN is introduced at the PDU session level by indicating the S-NSSAI corresponding to the PDU session in all signaling containing PDU session resource information. The 5GC validates whether the UE is authorized to access a certain network slice. The NG-RAN is informed about all network slices for which resources are being requested during the initial context setup [3,15].

Resource isolation enables specialized customization of network slices and prevents adverse effects of one slice on other slices. Hardware/software resource isolation is up to the implementation; nevertheless, RRM procedures and service agreements determine whether each slice may be assigned shared or dedicated radio resources. To enable differentiated handling of traffic for network slices with different service agreements, the NG-RAN is configured with a set of different configurations for different network slices and receives relevant information indicating which of the configurations applies to each specific network slice.

The NG-RAN selects the AMF based on a Temp ID or assistance information provided by the UE. In the event that a Temp ID is not available, the NG-RAN uses the assistance information provided by the UE at RRC connection establishment to select the appropriate AMF instance (i.e., the information is provided after the random access procedure). If such information is not available, the NG-RAN routes the UE to a default AMF instance [3,15].
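The AMF selection logic described above reduces to a simple precedence rule, sketched below in Python. The lookup-table arguments are hypothetical stand-ins for the NG-RAN's configured state, not 3GPP-defined interfaces.

```python
def select_amf(temp_id, assistance_info, amf_by_temp_id, amf_by_slice, default_amf):
    """Return the AMF instance to which the initial NAS message is routed.

    Follows the selection order described above: a valid Temp ID wins,
    then UE-provided slice assistance information, then the default AMF.
    All arguments are hypothetical lookup tables, not 3GPP-defined APIs.
    """
    if temp_id is not None and temp_id in amf_by_temp_id:
        return amf_by_temp_id[temp_id]           # Temp ID valid: route directly
    if assistance_info is not None:
        amf = amf_by_slice.get(assistance_info)  # e.g., keyed by requested S-NSSAI(s)
        if amf is not None:
            return amf
    return default_amf                           # fall back to a default AMF
```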
Enabling network slicing in 5G requires native support from the overall system architecture. As shown in Fig. 1.16, the overall architecture consists of three fundamental layers: the infrastructure layer, the network slice layer, and the network management layer. The infrastructure layer provides the physical and virtualized resources, for instance, computing resources, storage resources, and connectivity. The network slice layer is located above the infrastructure layer and provides the necessary NFs, tools, and mechanisms to form end-to-end logical networks via NSIs. The network management layer contains the generic BSS/OSS and the network slice management (NSM) system, which manages network slicing and ensures satisfaction of the SLA requirements. The overall architecture has the following key features:

Common infrastructure: Network slicing is different from a dedicated network solution that uses physically isolated and static network resources to support tenants. Network slicing promotes the use of a common infrastructure among tenants operated by the same operator. It helps to achieve higher resource utilization efficiency and to reduce the service time to market. Moreover, such a design is beneficial for long-term technology evolution as well as for maintaining a dynamic ecosystem.

On-demand customization: Each technical domain in an NSI has different customization capabilities, which are coordinated through the NSM system during the process of network slice template (NST) design, NSI deployment, and operation and management. Each technical domain can perform an independent customization process in terms of design schemes to achieve an effective balance between the simplicity needed by commercial practice and architectural complexity.

Isolation: The overall architecture supports the isolation of NSIs, including resource isolation, operation and management isolation, and security isolation. The NSIs can be either physically or logically isolated at different levels.

Guaranteed performance: Network slicing seamlessly integrates different domains to satisfy industry-defined 5G performance specifications and to accommodate vertical industry requirements.

Scalability: Owing to virtualization, which is one of the key enabling technologies for network slicing, the resources occupied by an NSI can change dynamically.

Operation and management capability exposure: Tenants may use dedicated, shared, or partially shared NSIs. Furthermore, different tenants may have independent operation and management demands. The NSM system provides the tenants with access to a number of operation and management functions of NSIs, which, for instance, allows them to configure NSI-related parameters such as policy.

Support for multi-vendor and multi-operator scenarios: Network slicing allows a single operator to manage multiple technical domains, which may be composed of network elements supplied by different vendors. In addition, the architecture needs to support a scenario where the services from the tenants may cover different administrative domains owned by different operators.
As we mentioned earlier, an NSI is a managed entity in the operator's network with a life cycle independent of the life cycle of the service instance(s). In particular, service instances are not necessarily active through the entire duration of the run-time phase of the supporting NSI. The NSI life cycle typically includes an instantiation, configuration, and activation phase, a run-time phase, and a decommissioning phase, during which the operator manages the NSI. As shown in Fig. 1.19, the network slice life cycle is described by a number of phases as follows [7]:

Preparation: In this phase, the NSI does not exist. The preparation phase includes the creation and verification of NST(s), on-boarding, preparing the necessary network environment used to support the life cycle of NSIs, and any other preparations that are needed in the network.

Instantiation, configuration, and activation: During instantiation/configuration, all shared/dedicated resources associated with the NSI are created and configured, and the NSI becomes ready for operation. The activation step includes any actions that make the NSI active, such as routing traffic to it, provisioning databases (if dedicated to the network slice, otherwise this takes place in the preparation phase), and instantiation, configuration, and activation of other shared and/or non-shared NF(s).

Run-time: In this phase, the NSI is capable of traffic handling and supports certain types of communication services. The run-time phase includes supervision/reporting as well as activities related to modification. Modification of the workflows related to run-time tasks may include upgrade, reconfiguration, NSI scaling, changes of NSI capacity, changes of NSI topology, and association and disassociation of NFs with the NSI.

Decommissioning: This step includes deactivation by taking the NSI out of the active state, as well as the retrieval of dedicated resources (e.g., termination or reuse of NFs) and reconfiguration of shared/dependent resources. Following this phase, the NSI no longer exists.

An NSI is complete in the sense that it includes all functionalities and resources necessary to support a certain set of communication services. The NSI contains NFs belonging to both the access and the core networks.

Figure 1.19 Life cycle phases of an NSI [7].
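The phase progression above maps naturally onto a small state machine. The sketch below is a minimal Python rendering of the life cycle, assuming the transitions implied by the description; the class and the transition table are illustrative, not part of any 3GPP API.

```python
from enum import Enum, auto

class NsiPhase(Enum):
    PREPARATION = auto()      # NSI does not exist yet; NST design/on-boarding
    ACTIVE = auto()           # instantiated, configured, and activated
    RUN_TIME = auto()         # carrying traffic; supervision, reporting, modification
    DECOMMISSIONED = auto()   # deactivated; dedicated resources reclaimed

# Allowed transitions in the life cycle described above.
_TRANSITIONS = {
    NsiPhase.PREPARATION: {NsiPhase.ACTIVE},
    NsiPhase.ACTIVE: {NsiPhase.RUN_TIME},
    NsiPhase.RUN_TIME: {NsiPhase.RUN_TIME, NsiPhase.DECOMMISSIONED},  # modification loops
    NsiPhase.DECOMMISSIONED: set(),
}

class NetworkSliceInstance:
    def __init__(self, nst_id: str):
        self.nst_id = nst_id               # template the NSI is created from
        self.phase = NsiPhase.PREPARATION

    def advance(self, target: NsiPhase):
        if target not in _TRANSITIONS[self.phase]:
            raise RuntimeError(f"illegal transition {self.phase} -> {target}")
        self.phase = target
```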
If the NFs are interconnected, the 3GPP management system contains the information relevant to the connections between these NFs, including the topology of connections and individual link requirements such as QoS attributes. The NSI is realized via the required physical and logical resources. A network slice is described by an NST, and the NSI is created using the NST and instance-specific information. The concept of network slice subnet instance (NSSI) management is introduced for the purpose of NSI management. For example, for instantiation of an NSI that contains radio access and core network components, these components can be defined and instantiated as two NSSIs, denoted by NSSI1 (RAT1) in the RAN and NSSI3 in the core network. The targeted NSI is then instantiated by combining NSSI1 and NSSI3. Another NSI can be instantiated by combining NSSI3 with another RAN NSSI denoted by NSSI2 (RAT2).

Depending on the communication service requirements, a communication service can use an existing NSI or trigger the creation of a new NSI. The new NSI may be created exclusively for this communication service, or it may be created to support multiple communication services with similar network slice requirements. The life cycle of a communication service is related to, but not dependent on, that of an NSI: the NSI may exist before the communication service uses it and may continue to exist after the communication service has stopped using it. Similarly, an NSI can be created using one or more existing NSSI(s) or can initiate the creation of one or more new NSSI(s), depending on the NSI requirements. The new NSSI(s) may be created just for this NSI or may be created to support multiple NSIs. The life cycle of an NSI is related to, but not dependent on, that of an NSSI; the NSSI may exist before the NSI is created and may continue to exist after the NSI is no longer needed.
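The NSSI composition example above can be expressed compactly in code. The following sketch, with hypothetical names, shows how a shared core subnet (NSSI3) is reused by two NSIs paired with different RAN subnets.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Nssi:
    """Network slice subnet instance covering a RAN or core domain."""
    name: str
    domain: str  # "ran" or "core"

@dataclass
class Nsi:
    """Network slice instance composed of one or more NSSIs."""
    name: str
    subnets: List[Nssi] = field(default_factory=list)

# The example from the text: NSSI3 (core) is shared by two NSIs,
# each paired with a different RAN subnet.
nssi1 = Nssi("NSSI1 (RAT1)", domain="ran")
nssi2 = Nssi("NSSI2 (RAT2)", domain="ran")
nssi3 = Nssi("NSSI3", domain="core")

nsi_a = Nsi("NSI-A", subnets=[nssi1, nssi3])
nsi_b = Nsi("NSI-B", subnets=[nssi2, nssi3])  # reuses the existing core NSSI
```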
1.1.5 Heterogeneous and Ultra-dense Networks

Effective network planning is essential to support the increasing number of mobile broadband data subscribers and bandwidth-intensive services competing for limited radio resources. Network operators have addressed this challenge by adding capacity through new spectrum, multi-antenna techniques, and more efficient modulation and coding schemes. However, these measures are not adequate in highly populated areas and at the cell edges, where performance can significantly degrade. In addition to the above remedies, operators have integrated small cells into their macro-networks to efficiently distribute network loading and to maintain performance and service quality while reusing spectrum more efficiently.

One solution to expanding an existing macro-network, while maintaining it as homogeneous, is to add more sectors per base station or to deploy more macro-base stations. However, reducing the site-to-site distance in the macro-cell layout can only be pursued to a certain extent because finding new macro-sites becomes increasingly difficult and expensive, especially in dense urban areas. An alternative is to introduce small cells through the addition of low-power access nodes or RRHs to the existing overlaid macro-cells, owing to their more economical site acquisition and equipment installation. Small cells are primarily added to increase capacity in hotspots with high user demand and to fill coverage holes in the macro-network in both outdoor and indoor environments. They also improve network performance and service quality by offloading the overlaid macro-cells. The result is a heterogeneous network topology with large macro-cells in conjunction with small cells providing increased capacity per unit area.

Heterogeneous network planning dates back to the GSM era, where cells were separated through the use of frequency reuse. While this approach could still be taken in LTE, LTE networks primarily use a frequency reuse of one to maximize utilization of the licensed bandwidth. In heterogeneous networks, the cells of different sizes are referred to as macro-cells, micro-cells, pico-cells, and femto-cells, in the order of decreasing transmit power. The actual cell size depends not only on the access node power but also on the physical antenna positions, as well as the topology of the cells and the propagation conditions. In general, the small cells in an ultra-dense network (UDN) are classified into full-functional base stations (pico-cells and femto-cells) and macro-extension access points (relays and RRHs).
A full-functional base station is capable of performing all functions of a macro-cell at lower power in a smaller coverage area and encompasses the full RAN protocol stack. A macro-extension access node, on the other hand, is an extension of a macro-cell that effectively increases the signal coverage and performs all or some of the lower protocol layer functions. Moreover, the small cells feature different capabilities, transmission powers, coverage footprints, and deployment scenarios. The UDN deployment scenarios introduce a different coverage environment in which any given user is in close proximity to many cells.

Small-cell architectures using low-power nodes were considered promising to mitigate the substantial increase in network traffic, especially for hotspot deployments in indoor and outdoor scenarios. A low-power node generally means a node whose transmit power is lower than that of the corresponding macro-node and base station classes, for example, pico-cell and femto-cell access nodes. Small-cell enhancements for LTE focused on additional functionalities for improved performance in hotspot areas, indoor and outdoor, using low-power nodes. Network architectures comprising small cells (of various types, including non-3GPP access nodes) and macro-cells can be considered a practical realization of heterogeneous networks. Increasing the density of the 3GPP native or non-indigenous small cells overlaid by macro-cells constitutes what is considered a UDN.

Small cells can be deployed sparsely or densely, with or without overlaid macro-cell coverage, in outdoor or indoor environments, and using ideal or non-ideal backhaul. Small-cell deployment scenarios include small-cell access nodes overlaid with one or more macro-cell layer(s) in order to increase the capacity of an already deployed cellular network. (Note: 3GPP uses the term layer in this context to refer to different radio frequencies or different component carriers.)
In this context, two scenarios can be considered: (1) the UE is simultaneously in coverage of both the macro-cell and the small cell(s); and (2) the UE is not simultaneously covered by the macro-cell and the small cell(s). The small-cell nodes may be deployed indoors or outdoors, and in either case could provide service to indoor or outdoor UEs, respectively. For an indoor UE, only low UE speeds of 0-3 km/h are considered. For outdoor UEs, in addition to low UE speeds, medium UE speeds up to 30 km/h were targeted. Throughput and mobility (seamless connectivity) were used as performance metrics for the low- and medium-speed mobility scenarios. Cell-edge performance (e.g., the fifth percentile of the user throughput CDF) and network/UE power efficiency were also used as evaluation metrics.

Backhaul is a critical component of heterogeneous networks, especially when the number of small nodes increases. Ideal backhaul, characterized by a very high-throughput and very low-latency link such as a dedicated point-to-point connection using optical fiber, and non-ideal backhaul (e.g., xDSL, microwave backhaul, and relaying) for small cells were studied by 3GPP, and performance and cost trade-offs were made. A categorization of ideal and non-ideal backhaul based on operators' data is listed in Table 1.1. (Note: Digital subscriber line (DSL) is a family of technologies that are used to transmit digital data over telephone lines. The asymmetric digital subscriber line is the most commonly used type of DSL technology for Internet access. The xDSL service can be delivered simultaneously with wired telephone service on the same telephone line, since DSL uses high-frequency bands for data transmission. On the customer premises, a DSL filter on each non-DSL outlet blocks any high-frequency interference to enable simultaneous use of the voice and DSL services.)

Table 1.1: Backhaul options for the small cells [12].

  Backhaul Type | Backhaul Technology | One-Way Latency | Throughput (Mbps)
  Non-ideal     | Fiber access 1      | 10-30 ms        | 10-10,000
  Non-ideal     | Fiber access 2      | 5-10 ms         | 100-1000
  Non-ideal     | Fiber access 3      | 2-5 ms          | 50-10,000
  Non-ideal     | DSL access          | 15-60 ms        | 10-100
  Non-ideal     | Cable               | 25-35 ms        | 10-100
  Non-ideal     | Wireless backhaul   | 5-35 ms         | 10-100+
  Ideal         | Fiber access 4      | <2.5 μs         | Up to 10,000

3GPP conducted extensive studies to investigate the interfaces between the macro and small cells, as well as between small cells, considering the amount and type of information/signaling that needs to be exchanged between the nodes in order to achieve the desired performance improvements, and whether a direct interface should be used between macro and small cells or between small cells. In those studies, the LTE X2 interface (i.e., the inter-eNB interface) was used as a starting point.
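A transport planner could use Table 1.1 as a simple feasibility filter, as in the Python sketch below. The encoded ranges mirror the table; the function, thresholds, and example budget are illustrative.

```python
# Table 1.1 encoded as (min, max) one-way latency in ms and throughput in Mbps;
# a simple filter then returns the technologies that satisfy a given transport
# budget. Values mirror the table above; the function itself is illustrative.
BACKHAUL_OPTIONS = {
    "fiber access 1":    {"latency_ms": (10, 30),    "throughput_mbps": (10, 10_000)},
    "fiber access 2":    {"latency_ms": (5, 10),     "throughput_mbps": (100, 1_000)},
    "fiber access 3":    {"latency_ms": (2, 5),      "throughput_mbps": (50, 10_000)},
    "dsl access":        {"latency_ms": (15, 60),    "throughput_mbps": (10, 100)},
    "cable":             {"latency_ms": (25, 35),    "throughput_mbps": (10, 100)},
    "wireless backhaul": {"latency_ms": (5, 35),     "throughput_mbps": (10, 100)},
    "fiber access 4":    {"latency_ms": (0, 0.0025), "throughput_mbps": (10_000, 10_000)},
}

def feasible_backhaul(max_latency_ms: float, min_throughput_mbps: float):
    """Return backhaul technologies whose worst-case latency and best-case
    throughput meet the given budget."""
    return [name for name, spec in BACKHAUL_OPTIONS.items()
            if spec["latency_ms"][1] <= max_latency_ms
            and spec["throughput_mbps"][1] >= min_throughput_mbps]

# Example: a latency-sensitive deployment needing 10 ms backhaul and 500 Mbps.
print(feasible_backhaul(10, 500))  # ['fiber access 2', 'fiber access 3', 'fiber access 4']
```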
In the small-cell enhancement studies, both sparse and dense small-cell deployments were considered. In some scenarios (e.g., indoor/outdoor hotspots), a single small-cell node or multiple small-cell nodes are sparsely deployed to cover the hotspot(s), whereas in other scenarios (e.g., dense urban areas, large shopping malls, etc.), a large number of small-cell nodes are densely deployed to support high-volume traffic over a relatively wide area covered by the small-cell nodes. The coverage of the small-cell layer is generally discontinuous between different hotspot areas, and each hotspot area is typically covered by a group of small cells, or a small-cell cluster. Furthermore, future extension or scalability of these architectures is an important consideration. For mobility and connectivity performance, both sparse and dense deployments were studied with equal priority.

Network synchronization is an important consideration, where both synchronized and unsynchronized scenarios should be considered between small cells, as well as between small cells and macro-cell(s). For specific operational features such as interference coordination, carrier aggregation, and coordinated multipoint (CoMP) transmission/reception, small-cell enhancements can benefit from synchronized deployments with respect to small-cell search/measurement and interference/resource management procedures.
Small-cell enhancements also addressed deployment scenarios in which different frequency bands are separately assigned to the macro-layer and the small-cell layer. Small-cell enhancements are applicable to the existing cellular bands and to those that might be allocated in the future, with special focus on higher frequency bands, for example, the 3.5 GHz band, to take advantage of more available spectrum and wider bandwidths. Small-cell enhancements have considered the possibility of frequency bands that, at least locally, are used only for small-cell deployments. The studies further considered cochannel deployment scenarios between the macro-layer and the small-cell layer. Some example spectrum configurations may include the use of carrier aggregation in the macro-layer with bands X and Y and the use of only band X in the small-cell layer. Other example scenarios may include small cells supporting carrier aggregation bands that are cochannel with the macro-layer, or small cells supporting carrier aggregation bands that are not cochannel with the macro-layer. One potential cochannel scenario may include the deployment of dense outdoor cochannel small cells, including low-mobility UEs and non-ideal backhaul. All small cells operate under macro-cell coverage irrespective of the duplex schemes [frequency division duplex/time division duplex (FDD/TDD)] used for the macro-layer and the small-cell layer. The air interface and solutions for small-cell enhancement are band-independent [12].

In a small-cell deployment, it is likely that the traffic volume and the user distribution vary dynamically between the small-cell nodes. It is also possible that the traffic is highly asymmetrical, either downlink- or uplink-centric. Both uniform and nonuniform traffic and load distributions in the time domain and the spatial domain are possible. During performance modeling, both non-full-buffer and full-buffer traffic were considered, where non-full-buffer traffic was prioritized as it was deemed a more practical representation of user activity.

Small-cell enhancements target high network energy efficiency and reasonable system complexity. The small cells can save energy by switching to a dormant mode, owing to the increased likelihood of periods of low or no user activity during operation. The trade-off between user throughput/capacity per unit area and network energy efficiency is an important consideration for small-cell deployments. The small cells can further improve UE energy efficiency, considering the small cell's short-range transmission path, which results in reduced energy per bit for uplink transmission, mobility measurements, cell identification, and small-cell discovery.

Given that some heterogeneous network deployments include user-installed access nodes in indoor environments, a self-organizing mechanism for deployment and operation of the small cells without direct operator intervention is required. 3GPP self-organizing network (SON) solutions aim to configure and optimize the network automatically, so that human intervention can be reduced and the capacity of the network can be increased. These solutions can be divided into three categories [85]:

Self-configuration: This is the dynamic plug-and-play configuration of a newly deployed access node. As shown in Fig. 1.20, the access node configures its physical cell identity (PCI), transmission frequency, and power, leading to faster cell planning and rollout.

Figure 1.20 SON framework [85].

The network interfaces (e.g., S1 and X2 in the case of LTE) are dynamically configured, and the IP address as well as the connection to the IP backhaul is established.
To reduce manual operation, the automatic neighbor relation (ANR) scheme is used. The ANR configures the neighbor list in newly deployed access nodes and optimizes the list over the course of operation. Dynamic configuration includes the configuration of the physical layer identifier, the PCI, and the cell global ID (CGID). The PCI mapping attempts to avoid assignment of duplicate identifiers to the access nodes in order to prevent collisions. The PCI can be assigned in a centralized or distributed manner. When centralized assignment is used, the operation and management system has complete knowledge and control of the PCIs. When the distributed solution is used, the operation and management system assigns a list of possible PCIs to the newly deployed access nodes, but the adoption of the PCI is under the control of the eNB. The newly deployed eNB requests a report, sent either by UEs over the air interface or by other eNBs over the X2 interface, including the already in-use PCIs, and then randomly selects its PCI from the remaining values.

The ANR is used to minimize the work required for configuration in newly deployed eNBs as well as to optimize configuration during operation. Correct and up-to-date neighbor lists increase the number of successful handovers and minimize the number of dropped calls. Before a handover can be executed, the source eNB requires neighbor information such as the PCI and CGID of the target eNB. The PCI is included in normal measurement reports. The mapping between the PCI and CGID parameters can be done using information from the operation and management system or reported by UEs decoding the target cell CGID on the broadcast channel of the target cell; the capability of decoding the CGID is an optional UE feature. A network operator can put a cell on an ANR black list to block certain handover candidates, for example, from indoor to outdoor cells. 3GPP has also specified LTE inter-frequency and inter-RAT ANR.
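The distributed PCI assignment just described can be sketched in a few lines of Python; the function below assumes the O&M-provided candidate list and the reported in-use set are already available, and the example values are hypothetical.

```python
import random

def select_pci(candidate_pcis, reported_in_use):
    """Distributed PCI selection as described above: the newly deployed eNB
    removes PCIs reported as already in use (by UEs over the air interface or
    by neighbor eNBs over X2) and picks one of the remaining values at random."""
    remaining = set(candidate_pcis) - set(reported_in_use)
    if not remaining:
        raise RuntimeError("no collision-free PCI available; O&M must extend the list")
    return random.choice(sorted(remaining))

# Example: O&M delegated PCIs 0..9; neighbors report 2, 3, and 7 as in use.
print(select_pci(range(10), {2, 3, 7}))
```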
Self-optimization: The self-optimization functions were mainly specified in 3GPP Rel-9 and include optimization of coverage, capacity, handover, and interference. Mobility load balancing is a feature whereby cells experiencing congestion can transfer excess load to other cells that have available resources. Mobility load balancing further allows the eNBs to exchange information about their load level and available capacity. The report can contain the computational load, the S1 transport network load, and the radio resource availability status. There are separate radio resource status reports for the uplink and downlink, which may include the total allocated resources, guaranteed bit rate (GBR) and non-guaranteed bit rate traffic statistics, the percentage of allocated physical resources relative to the total resources, and the percentage of resources available for load balancing (see the sketch following this list). Mobility load balancing can also be used across different radio access technologies. In the inter-RAT case, the load-reporting RAN information management protocol is used to transfer the information via the core network between the base stations of the different radio technologies. A cell capacity class value, set by the operation and management system, is used to compare the relative capacities of different radio access technologies. A handover due to load balancing is performed as a regular handover; however, it may be necessary to set the parameters such that the UE cannot return to the congested source cell. Mobility robustness optimization is a solution for automatic detection and correction of errors in the mobility configuration which may cause radio link failure as a result of unsuccessful handover.

Self-healing: Features for automatic detection and removal of failures and automatic adjustment of parameters were mainly specified in 3GPP Rel-10. Coverage and capacity optimization enables automatic correction of capacity problems due to variations in the environment. The minimization of drive tests is a feature that enables normal UEs to provide the same type of information as that collected during drive tests, with the advantage that UEs can also retrieve and report parameters from indoor environments.
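The sketch promised above models the per-direction load report and a naive offload trigger in Python; the field names are descriptive stand-ins rather than the ASN.1 of the actual X2AP messages, and the threshold is arbitrary.

```python
from dataclasses import dataclass

@dataclass
class ResourceStatusReport:
    """Illustrative per-direction load report exchanged for mobility load
    balancing; fields follow the quantities listed above (names are ours)."""
    hardware_load: str            # computational load, e.g., "low"/"mid"/"high"
    s1_tnl_load: str              # S1 transport network load
    gbr_prb_usage_pct: float      # resources used by guaranteed bit rate traffic
    non_gbr_prb_usage_pct: float  # resources used by non-GBR traffic
    capacity_for_lb_pct: float    # share of capacity available for load balancing

def should_offload(source: ResourceStatusReport, target: ResourceStatusReport,
                   threshold_pct: float = 20.0) -> bool:
    """Trigger a load-balancing handover when the target cell advertises
    sufficiently more spare capacity than the congested source cell."""
    return (target.capacity_for_lb_pct - source.capacity_for_lb_pct) >= threshold_pct
```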
The home eNB (HeNB) concept was introduced in LTE Rel-9 and defines a low-power node primarily used to enhance indoor coverage. Home eNBs, and particularly femtocells, are privately owned and deployed without coordination with the macro-network; as such, if their operating frequency is the same as the frequency used in the macro-cells and access to them is limited, there is a risk of interference between the femtocell and the surrounding network. The use of different cell sizes with overlapping coverage and the creation of a heterogeneous network add to the complexity of network planning. In a network with a frequency reuse of one, the UE normally camps on the cell with the strongest received signal power; hence, the cell edge is located at the point where the received signal strengths of the two cells are equal. In homogeneous network deployments, this also typically coincides with the point of equal uplink path loss toward both cells, whereas in a heterogeneous network, with high-power nodes in the large cells and low-power nodes in the small cells, the point of equal received signal strength is not necessarily the same as the point of equal uplink path loss. Therefore, a challenge in heterogeneous network planning is to ensure that the small cells actually serve a sufficient number of users. This can be done by increasing the area served by the small cell through the use of a positive cell selection offset, which is referred to as cell range extension (Fig. 1.21). A drawback of this scheme is the increased downlink interference experienced by a UE located in the extended cell region and served by the base station of the small cell; this effect may impact the quality of reception of the downlink control channels.
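Cell range extension amounts to adding a per-cell bias before comparing received powers, as in this Python sketch; the RSRP values and the 6 dB offset are hypothetical.

```python
def serving_cell(rsrp_dbm: dict, cre_bias_db: dict) -> str:
    """Pick the serving cell by biased received power, as in cell range
    extension: each candidate's measured RSRP is increased by its cell
    selection offset before comparison (offsets are hypothetical values)."""
    return max(rsrp_dbm, key=lambda cell: rsrp_dbm[cell] + cre_bias_db.get(cell, 0.0))

# Example: the macro is 4 dB stronger at the UE, but a 6 dB CRE bias
# extends the pico-cell's range so the UE attaches to the pico.
measurements = {"macro": -92.0, "pico": -96.0}
bias = {"pico": 6.0}
print(serving_cell(measurements, bias))  # -> 'pico'
```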
It is important to highlight that indoor small cells (femtocells) operate in three different access modes: open, closed, and hybrid. In open access mode, all subscribers of a given operator can access the node, while in closed access mode, access is restricted to a closed subscriber group. In hybrid mode, all subscribers can connect to the femtocell, with priority always given to the designated subscribers. A network comprising small cells and macro-cells is referred to in the literature as a HetNet. HetNets, in general, are considered a paradigm shift from classic homogeneous networks.

Figure 1.21 Uplink/downlink imbalance issue in HetNet deployments [12].

A number of features were added to the later releases of LTE that can be used to mitigate the inter-cell interference issue in heterogeneous networks. Inter-cell interference coordination (ICIC) was introduced in LTE Rel-8, in which the eNBs can coordinate over the X2 interface in order to mitigate inter-cell interference for UEs at the cell edge. The frequency-domain ICIC scheme evolved to enhanced ICIC (eICIC) in LTE Rel-10, where time-domain ICIC was added through the use of almost blank subframes. These subframes include only the LTE control channels and cell-specific reference signals, no user data, and are transmitted with reduced power. In that case, the macro-eNB transmits the almost blank subframes according to a semistatic pattern, and the UEs in the extended range of the small cells can better receive the downlink control and data channels from the small cell. Further enhancement of ICIC focused on interference handling by the UE through ICIC for the control signals, enabling even further cell range extension.
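A minimal sketch of the time-domain coordination described above: the macro advertises a semistatic almost-blank-subframe pattern, and the small cell schedules its range-extended UEs in the protected subframes. The period and duty cycle below are arbitrary illustrations, not values from the specification.

```python
# A semistatic almost-blank-subframe (ABS) pattern, as in eICIC: over a
# 40-subframe period the macro-eNB blanks the marked subframes (keeping only
# control channels and cell-specific reference signals at reduced power), and
# the pico-cell preferentially schedules its range-extended UEs there.
# The 1-in-8 duty cycle below is an arbitrary illustration.
ABS_PERIOD = 40
abs_pattern = [(n % 8 == 0) for n in range(ABS_PERIOD)]  # True = almost blank

def pico_can_schedule_cre_ue(subframe_number: int) -> bool:
    """Range-extended UEs are scheduled in protected (ABS) subframes."""
    return abs_pattern[subframe_number % ABS_PERIOD]
```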
Carrier aggregation (CA) was introduced in LTE Rel-10 to increase the total system bandwidth and the maximum user throughput. In this scheme, the component carriers (CCs) are aggregated, and a CA-capable UE can be allocated resources on all or some of the component carrier combinations. Cross-carrier scheduling is an important feature in heterogeneous networks supporting CA, where the downlink control channels are mapped to different component carriers in the large and small cells (as shown in Fig. 1.22). As an example, since the LTE downlink control channel, which carries downlink control information along with scheduling information, must be received by the UEs at the cell edge, it may be transmitted with a higher power than the traffic channels. Therefore, using different carriers for the downlink control channels in the large and small cells reduces the risk of inter-cell interference. The latest releases of LTE allow multi-carrier operation with different timing advances, enabling the combination of component carriers from macro-eNBs with those of small cells.

Figure 1.22 Illustration of cross-carrier scheduling and multi-cell operation in LTE [69].

The CoMP transmission and reception feature, introduced in LTE Rel-11, is an inter-cell and inter-user interference mitigation scheme that allows coordinated eNBs, relay nodes, or RRHs to simultaneously transmit to or receive from a UE. When CoMP is used in a heterogeneous network, a number of macro-cells and small cells can participate in data transmission to and from a UE. However, this requires the macro-eNBs and the small cells to be synchronized and coordinated via backhaul links, and the content to be available almost simultaneously at multiple transmission points, which is practically challenging. In the next section, we will see how cloud-RANs with distributed RRHs have made multipoint transmission/reception and data processing practically feasible.
In the CoMP scheme, a dynamic muting mechanism can be exploited to mute some radio resources at certain small cells for the benefit of other small cells. In this manner, the inter-cell coordination function which generates the muting patterns considers an appropriate metric for individual users and proportional fairness scheduling among all users. The muting technique takes into account not only the first dominant interferer but also the second dominant interference source for optimal interference mitigation.

Owing to network traffic load fluctuation, switching off the base stations/access nodes in cells with low or no traffic load is an essential method for UDNs to improve energy efficiency and to reduce inter-cell interference. In practice, the network load fluctuates over time and location due to the diversity of user behavior and mobility; this is especially true for UDNs, which warrants switching off under-utilized base stations. In a sparse network using frequency reuse of one with idle base stations, where the access node density is less than the user density, the average spectral efficiency can still increase linearly with the access node density, as in a network without idle-mode base stations. In a frequency-reuse-of-one UDN with idle-mode base stations, where the access node density is larger than the user density, the spectral efficiency increases only logarithmically with the access node density [68].

Optimal utilization of the large amount of radio resources in a UDN can become increasingly complex. Improper allocation of the abundant radio resources in a UDN can lead to higher inter-cell interference, unbalanced load distributions, and higher power consumption. Furthermore, due to inter-cell interference, local radio resource allocation strategies may have a global impact on UDN operation. In other words, a localized allocation strategy may not work in a UDN environment, which necessitates the use of a centralized RRM that has a holistic view of the UDN, allowing tight interworking across the network. Providing sufficient bandwidth over direct-wired backhaul may not be feasible for every node in a UDN, which may force some small cells to resort to wireless
backhauling, which consumes valuable radio resources and may cause additional interference and latency.

Recall that a UDN is a densified HetNet. The user association in HetNets follows a load-based association rule, where the users are biased to connect to the nearest small cell to offload their traffic. The small cells are usually lightly loaded due to their limited coverage area; hence, associating a given user with the nearest small cell gives the user a higher data rate. The biasing of users toward small cells is performed via virtual extension of their coverage area. Interference management is a challenging task in densified networks. Various types of small cells are deployed at large densities to provide the users with very high-throughput connections. The use of inter-cell coordination to mitigate the interference requires increasing signaling overhead due to the large number of deployed small cells; thus, distributed control is preferred to mitigate the interference in a UDN.

A UDN can be defined as a network in which there are more cells than active users; in mathematical terms, $\rho_{BS} \gg \rho_{user}$, where $\rho_{BS}$ denotes the area density of access nodes and $\rho_{user}$ denotes the area density of users. Another definition of UDN can be given solely in terms of the access node density, irrespective of the user density. The access nodes in UDN environments are typically low-power small cells with a small footprint, resulting in a small coverage area. Accordingly, the inter-site distance would be in the range of meters or tens of meters. Strong interference between neighboring cells is a limiting factor in UDNs: the proximity of the small cells to each other causes strong interference, and thus the use of effective interference management schemes is inevitable to mitigate the interference of neighboring cells. Densification of wireless networks can be realized either by deploying an increasing number of access nodes or by increasing the number of links per unit area.
In the first approach, the densification of access nodes can be realized in a distributed manner through the deployment of small cells (e.g., pico-cells or femto-cells) or via a centralized scheme using a distributed antenna system (DAS) in the form of a C-RAN architecture. In small-cell networks, femtocells are typically installed by the subscribers to improve coverage and capacity in residential areas, and pico-cells are installed by the operators in hotspots; thus, in small-cell networks the coordination mechanism is often distributed. Compared to relays, a DAS transmits the user signals to the base station via fiber links, while relays use the wireless spectrum, either in-band or out-of-band. (Note: A distributed antenna system is a network of spatially separated antennas which receive input from a common base station source and are connected to that source via a transport mechanism, providing wireless service within a geographic area or structure. A DAS improves mobile broadband coverage and reliability in areas with heavy traffic and enhances network capacity, alleviating pressure on wireless networks when a large group of people in close proximity are actively using their terminals. A DAS is an approach to extending outdoor base station signals into indoor environments: it uses multiple smaller antennas to cover the same area that the macro base station would otherwise cover and provides deeper penetration and coverage inside buildings. The RF input to the antennas can be conveyed either by lossy coaxial cables or by more expensive optical fiber links. Some in-building DASs can support multiple operators and standards at various levels, but advanced equipment is needed to cover a wider range of frequency bands and power outputs. Unwanted signal by-products and interference are serious issues in a shared DAS environment.)

1.1.6 Cloud-RAN and Virtual-RAN

Operators, in quest of more efficient ways to accommodate the increasing use of smartphones and other heavy data-consuming wireless devices in their networks, face a dilemma when it comes to expanding network capacity and coverage. Optical fiber is typically the first option considered when addressing the problem of exponential traffic growth in the network. However, optical fiber is expensive, takes a long time to install, and in some locations cannot be installed. To improve network capacity and coverage, operators have several options, among them small cells, carrier Wi-Fi, and DASs. These and a host of other solutions are being used by network operators as methods of expanding their networks to accommodate the exponential growth of user traffic and new applications.
(Note: Carrier Wi-Fi provides improved, scalable, robust unlicensed-spectrum coverage and is often deployed as a stand-alone solution. It offers easy data offload from cellular networks, with access and policy control capable of supporting large numbers of users. Wi-Fi with new standards, such as Hotspot 2.0, can provide high data rates for users who continuously stream content on mobile devices.)

In conjunction with the question of network expansion, there are other business imperatives. Mobile data transport architectures must be evaluated based on characteristics such as speed of deployment, time to market, cost-effectiveness, operational and architectural simplicity, expandability, and flexibility. Energy consumption and physical size are also key factors in the deployment of new network architectures, considering that power and space are expensive and scarce resources at base station sites and central offices (COs).

A centralized-RAN, or C-RAN, architecture addresses capacity and coverage issues while supporting mobile fronthaul and/or backhaul solutions as well as network self-organization, self-optimization, configuration, and adaptation with software control and management through SDN and NFV.
Cloud-RAN also provides advantages in controlling ongoing operational costs and improving network security, controllability, agility, and flexibility. The application of the C-RAN concept to small-cell architectures provides capacity benefits beyond those achieved through cell virtualization. In a traditional small-cell architecture, each access point provides a fixed amount of capacity within its coverage area. This works well only if the user traffic is evenly distributed across the coverage area, a condition that rarely happens in real life. The result is that, over time, some access points are overloaded while others remain relatively idle. Unlike stand-alone small cells, where the addition of new cells further aggravates inter-cell interference, C-RAN architectures can be expanded and scaled.

1.1.6.1 Architectural Aspects

Cloud-based processing techniques can be implemented to centralize the baseband processing of multiple small cells and to improve inter-cell mobility and interference management. Small cells can support a variety of applications and services, including voice-over-IP and videoconferencing, which can greatly benefit from a centralized architecture. The C-RAN architecture comprises distributed RRHs commonly connected to centralized BBUs using optical transport or Ethernet links. The RRHs typically include the radio, the associated RF amplifiers/filters/mixers, and the antenna. The centralized BBU is implemented separately and performs the signal processing functionalities of the RAN protocols. The centralized BBU model enables faster service delivery, cost savings, and improved coordination of radio capabilities across a set of RRHs. Fig. 1.23 shows the migration from the distributed RAN architecture to the centralized model. In a V-RAN architecture, the BBU functionalities and services are virtualized in the form of VMs running on general-purpose processor platforms located in a centralized BBU pool in the CO, which can effectively manage on-demand resource allocation, mobility, and interference control for a large number of interfaces [toward remote radio units (RRUs)] using programmable software layers.
The V-RAN architecture benefits from software-defined capacity and scaling limits. It enables selective content caching, which helps to further reduce network deployment and maintenance costs as well as to improve the user experience based on its cloud infrastructure.

Figure 1.23 Base station architecture evolution and high-level C-RAN architecture.

In a traditional cellular network architecture, each physical base station unit encompasses both baseband and radio processing functions. In C-RAN, the baseband processing for a large number of cells is centralized, resulting in improved performance, due to the ability to coordinate among multiple cells, and in cost reduction, as a result of pooling the shared resources. Small cells, when densely deployed across a large indoor environment, create large areas of overlap between neighboring cells, and inter-cell interference occurs at the cell boundaries. Some enterprise small cells use a central service controller to assist in handovers and backhaul aggregation, but this cannot overcome the fact that each cell interferes with its neighbors, since some level of interference is inevitable. Creating multiple independent cells further necessitates frequent handovers for mobile terminals, degrading the user experience and creating the potential for handover failures or constant back-and-forth handovers between adjacent cells, a phenomenon known as the ping-pong effect. Cell virtualization enables allocation of users' data over the same radio resources but sent to different access nodes for transmission to different users. In general, a C-RAN architecture consists of three main entities: the BBU(s), the RRHs/RRUs, and the transport network, which is called the fronthaul, as shown in Fig. 1.23.
In order to reduce the power consumption across the network and to reduce inter-cell interference, the C-RAN architecture allows idle RRHs to be shut off. This flexibility enables network adaptation based on traffic profiles that vary temporally and/or geographically, further reducing the interference to neighboring cells in order to optimize overall system performance. From a hardware usage perspective, scheduling baseband resources on demand to perform the communication processing for multiple RATs, while taking advantage of network virtualization techniques, can improve operational efficiency and reduce energy consumption.
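The energy-saving behavior described above can be approximated by a simple policy, sketched below in Python; the load metric, thresholds, and coverage constraint are illustrative assumptions rather than a standardized algorithm.

```python
def rrhs_to_sleep(load_by_rrh: dict, min_load: float, min_active: int):
    """Select idle RRHs that the C-RAN controller may switch off, per the
    energy-saving behavior described above. Thresholds are illustrative:
    an RRH is a sleep candidate when its load falls below `min_load`,
    but at least `min_active` RRHs are kept on for coverage."""
    sorted_rrhs = sorted(load_by_rrh, key=load_by_rrh.get)  # lightest first
    candidates = [r for r in sorted_rrhs if load_by_rrh[r] < min_load]
    max_off = max(0, len(load_by_rrh) - min_active)
    return candidates[:max_off]

# Example: nighttime load profile; keep at least two RRHs active.
print(rrhs_to_sleep({"rrh1": 0.02, "rrh2": 0.60, "rrh3": 0.01, "rrh4": 0.35},
                    min_load=0.05, min_active=2))  # -> ['rrh3', 'rrh1']
```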
CoMP, dual connectivity, virtual multiple-input multiple-output (MIMO), and coordinated beamforming are effective standard approaches to improving network capacity specified by 3GPP [47]. To support CoMP schemes with joint transmission/processing, the network configures the UEs to measure channel-state information and to periodically report the measurements to a set of collaborating nodes, which results in a large amount of signaling overhead. The logical interface between various RATs in a centralized BBU model enables signaling exchange among participating nodes and node selection for collaborative transmission and reception.

Since most current and future deployment scenarios are in the form of ultra-dense heterogeneous networks, multi-RAT coexistence will be a key issue. 3GPP LTE was designed to support inter-RAT handover for GSM and UMTS and interworking with wireless local area networks. As shown in Fig. 1.24, the architecture for interworking between LTE and different RATs requires connection of their core networks for higher layer signaling. This multi-RAT architecture is feasible, but the latency of an inter-RAT handover may be prohibitive for many applications and services. In order to improve the user experience, simplified network architectures with lower latency have been studied based on the C-RAN concept.

Figure 1.24 Mobility management in multi-RAT scenarios in distributed and centralized RAN architectures [61].

If the BBU pool, shown in Fig. 1.24, integrates the essential 2G, 3G, and LTE [and later 5G NR (new radio)] network protocols, the inter-RAT handover process can be simplified and the distance between the traffic anchor and the interface can be reduced, which ultimately reduces the handover interruption time for network services. Inter-RAT handover differs from intra-RAT handover, which is a procedure to select the cell with the strongest received signal. The objective of inter-RAT handover is to select a suitable cell, considering user needs (especially mobility prediction), traffic type, and network properties and state, by switching the logical interface between different RATs, which is more manageable in a C-RAN architecture.

Based on the distributed base station architectural model for the C-RAN, all or some of the baseband functions are performed in the centralized unit (CU). The processing resources on the CU can be dynamically managed and allocated. The C-RAN architecture allows improvement of the resource utilization rate and energy efficiency as well as support of collaborative techniques. The concept of C-RAN has been evolving over the past decade, and numerous architectural and deployment schemes for improving spectral efficiency and latency and supporting advanced interference mitigation techniques have been studied and trialed by the leading operators. As an example, the demarcation of the BBU into a CU and DU(s), the functional split, and the next-generation fronthaul interface (NGFI) were introduced in the C-RAN context to meet the 5G requirements. The principle of the CU/DU functional split originated from the real-time processing requirements of different applications.
As shown in Fig. 1.25, a CU typically hosts the non-real-time RAN protocols and functions offloaded from the core network, as well as MEC services. Accordingly, a DU is primarily responsible for physical layer processing and real-time processing of the layer-2 functions. In order to relax the transport requirements on the fronthaul link between the DU and the RRUs, some physical layer functions can be relocated from the DU to the RRU(s). From the equipment point of view, the CU equipment can be developed based on a general-purpose platform which supports RAN functions, functions offloaded from the core network, and MEC services, whereas the DU equipment must typically be developed using customized platforms which can support intensive real-time computations. The network function virtualization infrastructure allows system resources, including those in the CU and the DU, to be flexibly orchestrated via MANO, the SDN controller, and the traditional network operation and maintenance center, supporting fast service rollout. In order to address the transport challenges between the CU, DU, and RRU(s), the NGFI standard has been developed, in which an NGFI switch network is used to connect the C-RAN entities. Using the NGFI standard, the C-RAN entities can be flexibly configured and deployed in various scenarios. In the case of ideal fronthaul, the deployment of the DU can also be centralized, which can subsequently support cooperative physical layers. In the case of non-ideal fronthaul, the DU can be deployed in a fully or partially distributed manner. Therefore, the C-RAN architecture in conjunction with the NGFI standard can support flexible CU and DU deployments [47].

Figure 1.25 BBU architecture and evolution to 5G [47].
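One plausible rendering of the CU/DU/RRU demarcation described above is shown below; the mapping follows the common arrangement in which non-real-time protocols sit in the CU and real-time layer-2 plus the physical layer sit in the DU, but the exact split point is deployment-specific and the lists here are illustrative.

```python
# A sketch of the CU/DU demarcation described above: non-real-time protocol
# layers and service functions are placed in the CU, while real-time L2
# functions and physical layer processing stay in the DU (or are pushed
# further down to the RRU to relax fronthaul requirements). This mapping
# is one plausible arrangement, not a normative split definition.
CU_FUNCTIONS = [
    "RRC",                       # non-real-time control plane
    "PDCP",                      # packet processing tolerant of fronthaul latency
    "offloaded core functions",  # functions pulled down from the core network
    "MEC services",              # edge applications hosted at the CU
]

DU_FUNCTIONS = [
    "RLC", "MAC",                # real-time layer-2 processing
    "PHY (upper)",               # baseband physical layer processing
]

RRU_FUNCTIONS = [
    "PHY (lower)",               # relocated to the RRU to reduce fronthaul load
    "RF",
]
```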
Latency, cost, and distance should be carefully evaluated in determining the proper mode of transport. Some of the available options include dedicated fiber, optical transport network (OTN), passive optical network (PON), microwave, and wavelength-division multiplexing (WDM) schemes. (Note: An optical transport network consists of a set of optical network elements connected by optical fiber links, able to provide functionality for transport, multiplexing, routing, management, supervision, and survivability of optical channels carrying user signals, according to the requirements given in ITU-T Recommendation G.872. A distinguishing characteristic of the OTN is its provision of transport for any digital signal, independent of client-specific aspects, that is, client independence. As such, according to the general functional modeling described in ITU-T Recommendation G.805, the OTN boundary is placed across the optical channel/client adaptation, in a way that includes the server-specific processes and leaves out the client-specific processes.) (Note: A passive optical network is a communication technology used to provide fiber links to end users. One of its distinguishing features is that it implements a point-to-multipoint architecture in which passive fiber-optic splitters enable a single optical fiber to serve multiple endpoints, so that individual fibers need not be provisioned between the hub and each customer.) Mobile operators can further leverage low-cost, high-capacity fronthaul solutions using microwave E-band transport as an advanced application of the C-RAN architecture. E-band radios are point-to-point, line-of-sight (LoS) microwave radios operating at 71-86 GHz.

1.1.6.2 Fronthaul Transport and Functional Split Options

In a C-RAN architecture, the baseband signal processing is centralized and often moved from the individual RRUs to the edge cloud, resulting in a simplified network architecture, smaller form-factor radio units, efficient sharing and use of network resources, reduced costs of equipment installation and site maintenance, and higher spectral efficiency gains from joint processing schemes such as CoMP transmission/reception. However, it is necessary to overcome the constraints of the fronthaul network transporting the raw in-phase and quadrature (I/Q) digital radio samples from the radio units to the edge cloud for processing. The fronthaul network is traditionally implemented based on the CPRI standard, which currently supports data rates of up to 24 Gbps per cell, a total fronthaul latency of up to 200 μs, low jitter, tight synchronization, and high reliability. These requirements can only be realized with high-capacity fiber or point-to-point wireless links, making the deployment of the fronthaul network very costly and reducing the gains expected from centralization. The most promising approach to reducing the traffic load on the fronthaul interface is a functional split between the edge cloud BBU and the RRU(s). By adopting only a partial radio protocol split, the fronthaul requirements can be significantly relaxed while retaining the main centralization benefits.
These functional splits blur the difference between classical fronthaul and backhaul networks, calling for converged transport networks that unify backhaul and fronthaul equipment (i.e., integrated access and backhaul), hence reducing deployment and operational costs. The deployment of these networks can be facilitated by the introduction of NGFI.

Centralizing baseband processing simplifies network management and enables resource pooling and coordination of radio resources. The fronthaul is the transport network connecting the central site to the cell sites when some or all of the baseband functions are hosted at the central site. Point-to-point dark fiber would be the ideal transport medium for fronthaul because of its high bandwidth, low jitter, and low latency. However, dark fiber is not widely available; thus, there has been a need to relax the fronthaul requirements in order to enable the use of widely available transport networks such as packet-based Ethernet or E-band microwave.
With such transport networks, bandwidth may be limited, jitter may be higher, and latencies may be on the order of several milliseconds. Cloud-RAN architecture is able to support different functional splits. Centralizing only a portion of the baseband functions and leaving the remaining functions at the remote sites is one way to relax the fronthaul requirements. Depending on which functions are centralized, different bandwidth, latency, and jitter requirements apply. In a C-RAN architecture, the RRHs are connected to the BBU pool through high-bandwidth transport links known as fronthaul. There are a few standard interface options between the RRH and the BBU, including CPRI, radio-over-Ethernet (RoE), and Ethernet; however, CPRI and its most recent version, eCPRI, are currently the most common technologies used by C-RAN equipment vendors. The fronthaul link is responsible for carrying the radio signals, typically over an OTN, either in digitized form based on protocols such as CPRI or in analog form through radio-over-fiber technology. The main advantage of digitized transmission is the reduced signal degradation, allowing data transmission over longer distances and offering a higher degree of BBU centralization. The common fronthaul solution in C-RAN is to use dedicated fiber. However, centralization requires a large number of fiber links, which are limited and expensive to deploy. Alternative solutions include the use of other transport technologies such as WDM and OTN, or even the transmission of fronthaul data wirelessly using microwave or mmWave frequency bands. CPRI imposes very strict requirements on the fronthaul network. These requirements make the fronthaul network very expensive to deploy, thereby offsetting the cost savings expected from C-RAN. It can therefore be argued that the fronthaul network could become the bottleneck of 5G mobile networks.
One of the potential technologies targeted to enable future cellular network deployment scenarios and applications is the support for wireless backhaul and relay links, enabling flexible and very dense deployment of new radio cells without the need to densify the transport network proportionately.

The data rates on the fronthaul links are substantially higher than the data rates on the radio interface due to CPRI I/Q sampling and additional control information. The centralized BBU and the distributed RRHs exchange uncompressed I/Q samples; therefore, efficient compression schemes are needed to optimize such wideband transmission over capacity-constrained fronthaul links. Possible solutions include digital RF signal sampling rate reduction, use of nonlinear quantization, frequency-domain subcarrier compression, or I/Q data compression. The choice of the most suitable compression scheme is a trade-off among the achievable compression ratio, the algorithm and design complexity, the computational delay, the signal distortion it introduces, and power consumption. Reducing the signal sampling rate is a low-complexity scheme with minimal impact on the protocols. Nonlinear quantization improves the signal-to-noise ratio (SNR). Logarithmic encoding algorithms such as µ-law or A-law can also be used to achieve higher transport efficiency on the fronthaul links. Implementation of the orthogonal frequency division multiplexing (OFDM) processing blocks at the RRH allows further reduction in the required fronthaul capacity.

A-law is a companding algorithm used in European 8-bit PCM digital communications systems to modify the dynamic range of an analog signal for digitization; it is one of the versions of the ITU-T G.711 standard. The µ-law algorithm is another companding algorithm, primarily used in 8-bit PCM digital telecommunication systems in North America and Japan. Companding algorithms reduce the dynamic range of an audio signal, resulting in an increase in the signal-to-noise ratio achieved during transmission; in the digital domain, companding can reduce the quantization error.
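As an illustration of the logarithmic companding just described, the following sketch applies µ-law compression to synthetic I/Q-like samples before 8-bit quantization. The µ = 255 constant follows ITU-T G.711, but the direct use of the continuous companding formula (rather than the exact G.711 segment encoding) and the uniform 8-bit quantizer are simplifying assumptions.

    import numpy as np

    MU = 255.0  # G.711 u-law constant

    def mu_law_compress(x: np.ndarray) -> np.ndarray:
        """Compand samples normalized to [-1, 1]."""
        return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

    def mu_law_expand(y: np.ndarray) -> np.ndarray:
        """Inverse companding."""
        return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

    # Example: compand, quantize to 8 bits, expand, and measure the SNR.
    rng = np.random.default_rng(0)
    x = rng.normal(scale=0.1, size=10_000).clip(-1, 1)   # low-amplitude I/Q-like samples
    q = np.round(mu_law_compress(x) * 127) / 127          # simple 8-bit uniform quantizer
    x_hat = mu_law_expand(q)
    snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - x_hat)**2))
    print(f"SNR after 8-bit u-law transport: {snr_db:.1f} dB")

Because the companding law expands small amplitudes before quantization, low-level samples retain far more resolution than with a plain 8-bit linear quantizer, which is the source of the transport-efficiency gain mentioned above.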
CPRI requires a round-trip latency of 5 µs, excluding propagation delay [75]. More importantly, the total delay including propagation delay is limited by the air-interface hybrid automatic repeat request (HARQ) timing, since HARQ acknowledgments have to be received at the DL/UL transceiver within a certain duration, and baseband processing takes a certain amount of time depending on the air-interface technology. As a result, typically around 200 µs is available for total fronthaul latency. Assuming a speed of light of 200,000 km/s in fiber, the CPRI maximum transmission distance is limited to approximately 20 km.

In a fronthaul network that utilizes dark fiber, jitter rarely occurs between the BBU and the RRH, because this type of optical fiber rarely introduces any jitter. In contrast, in a fronthaul network containing active equipment such as WDM or PON systems, jitter can be introduced during signal processing (e.g., mapping/multiplexing in OTN). CPRI I/Q bit streams carrying such jitter can cause errors in the clock and data recovery process at the RRH, subsequently degrading the system performance of the RRH. Degraded frequency accuracy of the reference clock recovered in the RRH can affect the performance of all components that use the reference clock. For example, an inaccurate reference clock may cause errors when converting LTE/NR I/Q samples into analog signals during digital-to-analog conversion, and can further lead to inaccurate frequency of the carrier signals used for radio transmission of the analog signals. Therefore, jitter in the fronthaul network can significantly impact the quality of the LTE/NR signals transmitted through the RRH antennas, and when implementing a fronthaul network for C-RAN, extensive verification is required to ensure that the jitter introduced by active equipment remains within the tolerable range [60].
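The 20 km figure quoted above follows directly from the latency budget; a quick sanity check, assuming the 200 µs total round-trip budget, the 5 µs CPRI equipment allowance, and 200,000 km/s propagation in fiber:

    # Back-of-the-envelope fronthaul reach from the HARQ-driven latency budget.
    # The allocation of the budget between equipment and propagation is an
    # illustrative assumption based on the figures quoted in the text.

    V_FIBER_KM_PER_S = 200_000.0      # roughly 2/3 of c in glass

    def max_fiber_reach_km(total_rtt_budget_s: float = 200e-6,
                           equipment_rtt_s: float = 5e-6) -> float:
        propagation_rtt = total_rtt_budget_s - equipment_rtt_s
        one_way_s = propagation_rtt / 2   # budget covers the round trip
        return one_way_s * V_FIBER_KM_PER_S

    print(f"Max BBU-RRH separation: {max_fiber_reach_km():.1f} km")   # ~19.5 km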
The stringent latency requirements have kept the fronthaul interface away from packet-switched schemes such as Ethernet. However, with the exponential increase in bandwidth requirements for 5G networks, packet-based transport schemes cannot be disregarded. The economy of scale and the statistical multiplexing gain of Ethernet are essential for the new fronthaul transport. In cooperation with the CPRI forum, IEEE 802.1 took on the task of defining a new fronthaul transport standard under the IEEE 802.1CM project (http://www.ieee802.org/1/pages/802.1cm.html). The project, entitled Time-Sensitive Networking (TSN) for fronthaul, defines profiles for bridged Ethernet networks that carry fronthaul payloads in response to requirements contributed by the CPRI forum. The requirements can be divided into three categories:

1. Class 1: I/Q and control and management (C&M) data
2. Synchronization
3. Class 2: eCPRI, which has been recently added

Time-sensitive networking is a standard developed by IEEE 802.1Q to provide deterministic messaging on standard Ethernet. A time-sensitive networking scheme is centrally managed, with guaranteed delivery and minimized jitter achieved through scheduling, for those real-time applications that require deterministic behavior. Time-sensitive networking is a data link layer protocol and as such is part of the Ethernet standard. The forwarding decisions made by time-sensitive networking bridges use the Ethernet header contents and not the Internet protocol address. The payloads of the Ethernet frames can be anything and are not limited to Internet protocol packets; this means that time-sensitive networking can be used in any environment and can carry the payload of any application. There are five main components in the time-sensitive networking solution [78]:

Time-sensitive networking flow: The time-critical communication between end devices, where each flow has strict timing requirements and is uniquely identified by the network devices.
End devices: The source and destination of the time-sensitive networking flows, running an application that requires deterministic communication.

Bridges: Also referred to as Ethernet switches, these are special bridges capable of transmitting the Ethernet frames of a time-sensitive networking flow on schedule and receiving them according to a schedule.

Central network controller: A proxy for the network comprising the time-sensitive networking bridges and their interconnections, and for the control applications that require deterministic communication. The central network controller defines the schedule according to which all time-sensitive networking frames are transmitted.

Centralized user configuration: An application that communicates with the central network controller and the end devices, representing the control applications and the end devices. The centralized user configuration makes requests to the central network controller for deterministic communication with specific requirements for the flows.

I/Q and C&M data can be transported independently. The round-trip delay for I/Q is limited to 200 µs and the maximum frame error rate is 10⁻⁷. The C&M data has more relaxed time budgets. Synchronization signals represent an interesting aspect, with a wide range of requirements driven by wireless standards such as 3GPP LTE/NR. Four classes have been defined [75]:

1. Class A+: The strictest class, with a time error budget of 12.5 ns (one way) for applications such as MIMO and transmit diversity.
2. Class A: Time error budget of up to 45 ns for applications including contiguous intra-cell CA.
3. Class B: Budgets of up to 110 ns for non-contiguous intra-cell CA.
4. Class C: The least strict class, with a budget of up to 1.5 µs from the primary reference time clock to the end-application clock recovery output. (The primary reference time clock provides a reference time signal traceable to a recognized time standard.)

The above synchronization requirements, which continue to become more stringent with new releases of the standard, pose new challenges for network designers.
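The class budgets above can be captured in a small lookup table, for instance to check a measured time error against the profiles; the helper below is a hypothetical illustration, not part of any standard API.

    # One-way time-error budgets for the fronthaul synchronization classes
    # listed above; the classification helper is an illustrative utility.

    SYNC_CLASS_BUDGET_NS = {
        "A+": 12.5,     # e.g., MIMO, transmit diversity
        "A": 45.0,      # contiguous intra-cell carrier aggregation
        "B": 110.0,     # non-contiguous intra-cell carrier aggregation
        "C": 1500.0,    # PRTC to end-application clock recovery output
    }

    def strictest_class_met(measured_error_ns: float) -> str | None:
        """Return the most demanding class whose budget covers the measurement."""
        for cls, budget in sorted(SYNC_CLASS_BUDGET_NS.items(), key=lambda kv: kv[1]):
            if measured_error_ns <= budget:
                return cls
        return None  # exceeds even the Class C budget

    print(strictest_class_met(30.0))   # -> 'A'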
Traditional backhaul networks mostly rely on GPS receivers at cell sites. This is the simplest solution from the perspective of backhaul network design, but GPS systems have their own vulnerabilities and are not available in certain locations (e.g., deep indoor environments). Therefore, operators around the world have increasingly begun to deploy the precision time protocol (PTP)/IEEE 1588v2 scheme as a backup mechanism, and in some cases as the primary synchronization source in the absence of a viable GPS-based solution (IEEE Std 1588-2008: IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems, https://standards.ieee.org/findstds/standard/1588-2008.html).

Standards bodies such as ITU-T have continued to refine and enhance the architectures and metrics for packet-based synchronization networks in parallel with the development of the new fronthaul. The ITU-T G.826x and G.827x series provide a rich set of documents that define the architectures, profiles, and network limits for frequency and time/phase synchronization services (see G.826: End-to-end error performance parameters and objectives for international, constant bit-rate digital paths and connections, https://www.itu.int/rec/T-REC-G.826/en, and G.827: Availability performance parameters and objectives for end-to-end international constant bit-rate digital paths, https://www.itu.int/rec/T-REC-G.827/en). Phase synchronization presents an especially interesting challenge for synchronization experts, as evidenced by the above fronthaul synchronization requirements.

PTP has been defined to synchronize the time and phase of end applications to a primary reference. The PTP protocol continuously measures and attempts to eliminate any offset between the phase of the end application and the primary reference. However, in conventional Ethernet networks, packet delay variation has posed a major challenge to transferring acceptable clock quality for wireless applications. Ethernet switch manufacturers responded to this challenge by delivering new classes of PTP-aware nodes such as boundary clocks and transparent clocks.
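The offset measurement that PTP performs can be summarized with the standard two-way exchange arithmetic. The sketch below uses the conventional t1-t4 timestamps with made-up values; the comment notes why path asymmetry, discussed next, translates directly into timing error.

    # Standard two-way PTP exchange: t1 = Sync departure (master),
    # t2 = Sync arrival (slave), t3 = Delay_Req departure (slave),
    # t4 = Delay_Req arrival (master). Timestamp values are illustrative.

    def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
        offset = ((t2 - t1) - (t4 - t3)) / 2          # slave clock minus master clock
        mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2  # one-way delay estimate
        return offset, mean_path_delay

    # Note: this arithmetic assumes symmetric forward/reverse paths. Any
    # asymmetry 'a' between the two directions appears directly as a time
    # error of a/2, which is why asymmetry analysis and PTP-aware network
    # elements matter in practice.
    offset, delay = ptp_offset_and_delay(t1=100.0, t2=160.0, t3=300.0, t4=340.0)
    print(offset, delay)   # -> offset = 10.0, mean path delay = 50.0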
PTP-aware nodes are being increasingly deployed in wireless access networks around the world. While packet delay variation is not a major concern for these deployments, timing error analysis remains a major point of focus. Timing error is the difference between the time of a clock at one relevant part of the network and the time of a reference clock, such as one delivered by a GPS source, at another part of the network. It can result from network asymmetries and node configuration/performance issues [84].

In order to relax the excessive latency and capacity constraints on the fronthaul, operators and vendors have revisited the concept of C-RAN and considered a more flexible distribution of baseband functionality between the RRH and the BBU pool. Instead of centralizing the entire BBU processing in the cloud, it is possible, by dividing the physical receive and transmit chains into different blocks, to keep a subset of these blocks in the RRH. By gradually placing more and more BBU processing at the edge of the network, the fronthaul capacity requirement becomes less stringent. Nevertheless, partial centralization has two main drawbacks, both relating to the initially envisioned benefits of C-RAN: (1) RRHs become more complex, and thus more expensive; and (2) decentralizing the BBU processing reduces the opportunities for multiplexing gains, coordinated signal processing, and advanced interference avoidance schemes. Consequently, flexible or partial centralization is a trade-off between what is gained in terms of fronthaul requirements and what is lost in terms of C-RAN features. Another key question is how the information between the RRH and the BBU is transported over the fronthaul link.
A number of fronthaul transmission protocols have been studied since the inception of the C-RAN architecture; however, transport schemes such as CPRI have been predominantly considered for carrying raw I/Q samples in a traditional C-RAN architecture. Considering the potential for various functional splits between the BBU and the RRH, different types of information might need to be transported over the fronthaul link. Given the extensive adoption of Ethernet in data centers and the core network, RoE could be a generic, cost-effective, off-the-shelf alternative for fronthaul transport. Furthermore, while a single fronthaul link per RRH to the BBU pool has usually been assumed, it is expected that the fronthaul network will evolve to more complex multi-hop topologies, requiring switching and aggregation; this is further facilitated by a standard Ethernet approach. Nevertheless, packetization over the fronthaul introduces some additional concerns related to latency and overhead. As information arriving at the RRH and/or BBU needs to be encapsulated in an Ethernet frame, header-related overhead is introduced per frame. To ensure that this overhead is small and does not waste the potential bandwidth gains from baseband functional splitting, it would be desirable to fill an Ethernet payload before sending a frame. However, waiting to fill a payload introduces additional latency. Hence, it is important to consider the impact of packetization on the fronthaul bandwidth and latency, in conjunction with the possible functional splits between the RRH and the BBUs, in order to understand the feasibility and potential gains of different approaches.
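The packetization trade-off can be made concrete with a small calculation: larger Ethernet payloads amortize the per-frame header overhead but take longer to fill at a given split data rate. The header size and rate below are illustrative assumptions, not values from any fronthaul specification.

    # Fill-latency vs. header-overhead trade-off for Ethernet-based fronthaul.
    # Header size and split data rate are illustrative assumptions.

    HEADER_BYTES = 42   # Ethernet + VLAN + typical encapsulation, illustrative

    def packetization(split_rate_bps: float, payload_bytes: int):
        fill_latency_us = payload_bytes * 8 / split_rate_bps * 1e6
        overhead = HEADER_BYTES / (HEADER_BYTES + payload_bytes)
        return fill_latency_us, overhead

    for payload in (64, 256, 1500):
        lat, ovh = packetization(split_rate_bps=1e9, payload_bytes=payload)
        print(f"{payload:>5} B payload: fill {lat:6.2f} us, header overhead {ovh:5.1%}")

At an assumed 1 Gbps split rate, a full 1500-byte payload takes 12 µs to fill but costs under 3% overhead, whereas 64-byte frames fill in well under a microsecond at nearly 40% overhead, which is precisely the tension described above.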
In the study item for the new radio access technology, 3GPP studied different functional splits between the CU and the DU [34]. Fig. 1.26 shows the possible functional splits between the CU and the DU. After months of discussions between the opponents and proponents of open interfaces, 3GPP initially decided to specify two of the eight possible functional splits, that is, options 2 and 7, but no agreement on option 7 could be reached. Note that option 8 has already been used in LTE and previous generations, where CPRI was the main fronthaul interface transport scheme. Moving from left to right in Fig. 1.26, the split point moves from layer-3 protocols and functions to the lower layer protocols and functions, that is, layers 1 and 2, and ultimately to I/Q sample transmission over the fronthaul. Split option 7 has three variants, as shown in Fig. 1.27, depending on which aspects of physical layer processing are performed in the DU/RRUs. In the following, a detailed description of each functional split and its corresponding advantages and disadvantages is provided [34]; a compact summary sketch follows the list:

Option 1: This functional split option is similar to the reference architecture for dual connectivity. In this option, the RRC sublayer is located in the CU, while the packet data convergence protocol (PDCP), radio link control (RLC), MAC, physical layer, and RF functional processing are located in the DU(s). This option allows separate user-plane connections (split bearers) while providing centralized RRC and management. It may, in some circumstances, provide benefits in handling edge computing or low-latency use cases where the user data must be stored/processed in proximity to the transmission point. However, due to the separation of RRC and PDCP, securing the interface in practical deployments may affect the performance of this option. Furthermore, the RRUs will be more complex and expensive due to the additional hardware for local processing of layer-1 and layer-2 functions.

Option 2: This functional split is similar to option 1, except that the PDCP functions are co-located with the RRC functions in the CU. This option allows traffic aggregation from NR and LTE transmission points to be centralized, and it can further facilitate the management of traffic load between NR and LTE links. Note that the PDCP-RLC split was already standardized in LTE under the dual-connectivity work item. In addition, this option can be implemented by separating the RRC and PDCP for the control-plane stack and the PDCP for the user-plane stack into different central entities.
This option enables centralization of the PDCP sublayer, which may be predominantly affected by user-plane processing and may scale with user-plane traffic load.

Figure 1.26 Functional split options between the CU and the DU [34].

Figure 1.27 Variants of the option 7 functional split [34].

Option 3: In this option, the lower RLC functions, MAC sublayer, physical layer, and RF functions are located in the DU, whereas PDCP and the higher RLC functions are implemented in the CU. Depending on real-time/non-real-time RLC processing requirements, the lower RLC may include segmentation functions and the higher RLC may include ARQ and other RLC functions. In this case, the centralized RLC functions segment the RLC PDUs based on the status reports, while the distributed RLC functions segment the RLC PDUs to fit the available MAC PDU resources. This option allows traffic aggregation from NR and LTE transmission points to be centralized. It can further facilitate the management of traffic load between NR and LTE transmission points and may provide better flow control across the split.
The ARQ functions located in the CU may provide centralization gains. Failures over the transport network may also be recovered using the end-to-end ARQ mechanism at the CU, which may provide more protection for critical data and control-plane signaling. The DUs without RLC functions may handle more connected-mode UEs, as there is no RLC state information stored and hence no need for UE context. Furthermore, this option may facilitate implementation of integrated access and backhaul to support self-backhauled NR transmission points. It has been argued that this option may be more robust under non-ideal transport conditions, since ARQ and packet ordering are performed at the CU. This option may reduce processing and buffering requirements in the DUs due to the absence of the ARQ protocol, and it may provide an efficient way of implementing intra-gNB mobility. Nonetheless, this option is more prone to latency than the options with ARQ in the DUs, since retransmissions are susceptible to transport network latency.

In an alternative implementation, the lower RLC functional group may consist of the transmitting side of the RLC protocol, associated with downlink transmission, while the higher RLC functional group comprises the receiving side of the RLC protocol, associated with uplink transmission. This functional regrouping is not sensitive to the transmission network latency between the CU and the DU and uses interface formats inherited from the legacy PDCP-RLC and MAC-RLC interfaces. Since the receiving side of the RLC protocol is located in the CU, there is no additional transmission delay from the PDCP/RLC reestablishment procedure when submitting the RLC SDUs to PDCP. Furthermore, this alternative does not impose any transport constraints, for example, with respect to transport network congestion. Nevertheless, because flow control is performed in the CU while the RLC transmit side resides in the DU, double buffering is needed for transmission.

A service data unit (SDU) is a specific unit of data that has been passed down from an open-system interconnection layer to a lower layer, which the lower layer has not yet encapsulated into a protocol data unit (PDU). An SDU is a set of data that is sent by a user of the services of a given layer and is transmitted semantically unchanged to a peer service user. The PDU at layer N is the SDU of layer N − 1; in fact, the SDU is the payload of a given PDU. That is, the process of changing an SDU into a PDU consists of an encapsulation process performed by the lower layer: all data contained in the SDU becomes encapsulated within the PDU. Layer N − 1 adds headers/subheaders and padding bits (if necessary to adjust the size) to the SDU, transforming it into the PDU of layer N − 1. The added headers/subheaders and padding bits are part of the process used to make it possible to send data from a source node to a destination node.
Option 4: In this split, the MAC sublayer, physical layer, and RF functions are processed in the DUs, whereas the PDCP and RLC protocols are processed in the CU. No particular advantage has been shown for this option.

Option 5: In this option, the RF, physical layer, and some parts of the MAC sublayer functions (e.g., the HARQ protocol) are implemented in the DUs, while the upper protocol stack is implemented in the CU. By splitting the MAC sublayer into two entities (e.g., upper and lower MAC), the services and functions provided by the MAC sublayer are implemented in the CU and/or the DU. As an example, a centralized scheduling function located in the upper MAC can be in charge of controlling multiple lower MAC sublayers, and the inter-cell interference coordination located in the upper MAC can be responsible for interference coordination. Time-critical functions in the lower MAC may include functions with stringent delay requirements (e.g., the HARQ protocol) or functions where performance is proportional to latency (e.g., radio channel and signal measurements at the physical layer, or random access control).
Radio-specific functions in the lower MAC can perform scheduling-related processing and reporting. They can also control the activities of the configured UEs and report statistics periodically or on demand to the upper MAC. This option allows traffic aggregation/distribution from/to NR and LTE transmission points, and it can facilitate the management of traffic load between NR and LTE transmission points. In this option, the requirements on fronthaul bandwidth and latency can be relaxed, depending on the load across the access and core network interface. It allows efficient interference management across multiple cells and enhanced coordinated scheduling schemes such as multipoint transmission/reception.

Option 6: In this option, the physical layer and RF functions are implemented in the DU, whereas the upper protocol layers are located in the CU. The interface between the CU and the DUs carries data, configuration, and scheduling-related information, as well as measurement reports. This option allows centralized traffic aggregation from NR and LTE transmission points, which can facilitate management of the traffic load between NR and LTE access nodes. The fronthaul throughput requirements are reduced, as the payload for this option consists of transport block bits. Joint transmission and scheduling are also possible in this case, since the MAC sublayer is centralized. This option may require subframe-level timing synchronization between the MAC sublayer in the CU and the physical layer in the DUs. Note that the round-trip fronthaul delay may affect HARQ timing and scheduling.

Option 7: In this option, as shown in Fig. 1.27, the lower physical layer functions and RF circuits are located in the DU(s), while the upper protocol layers, including the upper physical layer functions, reside in the CU. There are multiple realizations of this option, including asymmetrical implementation of the option in the downlink and uplink (e.g., option 7-1 in the uplink and option 7-2 in the downlink).
A compression technique may be applied to reduce the required transport bandwidth between the CU and the DU. This option allows traffic aggregation from NR and LTE transmission points to be centralized and can facilitate the management of traffic load between NR and LTE transmission points. It can, to some extent, relax the fronthaul throughput requirements, and it allows centralized scheduling and joint processing on both the transmit and receive sides. However, it may require subframe-level timing synchronization between the fragmented parts of the physical layer in the CU and the DUs. This option can be implemented in the following forms:

Option 7-1: In this variant, in the uplink, the fast Fourier transform (FFT), CP removal (OFDM processing), and possibly PRACH processing are implemented in the DUs, and the remaining physical layer functions reside in the CU. In the downlink, the inverse FFT (IFFT) and CP insertion blocks (OFDM processing) reside in the DUs, and the rest of the physical layer functions are performed in the CU. This variant allows the implementation of advanced receivers.

Option 7-2: In this variant, in the uplink, the FFT, CP removal, resource de-mapping, and possibly MIMO decoding functions are implemented in the DU, and the remaining physical layer processing is performed in the CU. In the downlink, the IFFT, CP addition, resource mapping, and MIMO precoding functions are performed in the DU, and the rest of the physical layer processing is performed in the CU. This variant also allows the use of advanced receivers for enhanced performance.

Option 7-3: This downlink-only variant implements the channel encoder in the CU, while the rest of the physical layer functions are performed in the DU(s). It can reduce the fronthaul throughput requirements, as the payloads consist of the encoded bits.
Option 8: In this option, the RF functionality is in the DU and the entire set of upper layer functions is located in the CU. Option 8 allows for separation of the RF and the physical layer and further facilitates centralization of processing at all protocol layers, resulting in very tight coordination of the RAN and efficient support of features such as CoMP, MIMO, load balancing, and mobility. This option allows traffic aggregation from NR and LTE transmission points to be centralized and can facilitate the management of traffic load between NR and LTE transmission points, yielding a high degree of centralization and coordination across the entire protocol stack, which enables more efficient resource management and radio performance. Separation between the RF and the physical layer decouples the RF components from physical layer updates, which may improve their respective scalability. It also allows reuse of the RF components to serve the physical layers of different radio access technologies, and it allows pooling of physical layer resources, which may enable cost-efficient dimensioning of the physical layer. It further allows operators to share RF components, which may reduce system and site development/maintenance costs. However, it results in more stringent requirements on fronthaul latency, which may constrain network deployments with respect to network topology and available transport options, as well as rigorous requirements on fronthaul bandwidth, which may imply higher resource consumption and costs in transport mechanisms.
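As promised above, the split options can be recapped in a compact data structure showing the highest protocol function remaining in the DU for each option; this is a simplification of the preceding descriptions, for illustration only.

    # Compact recap of the 3GPP split options discussed above: the highest
    # protocol function left in the DU for each option. The qualitative
    # fronthaul trend is a rough simplification of the text.

    SPLIT_DU_TOP_FUNCTION = {
        1: "PDCP (RRC in CU)",
        2: "RLC (RRC/PDCP in CU)",
        3: "Lower RLC (higher RLC/PDCP in CU)",
        4: "MAC (RLC/PDCP in CU)",
        5: "Lower MAC, e.g., HARQ (upper MAC in CU)",
        6: "PHY (MAC and above in CU)",
        7: "Lower PHY (upper PHY in CU; variants 7-1/7-2/7-3)",
        8: "RF only (entire PHY and above in CU)",
    }

    def fronthaul_pressure(option: int) -> str:
        """Roughly: lower (PHY-level) splits demand more fronthaul bandwidth
        and tighter latency, but enable more centralization gains."""
        return "higher" if option >= 6 else "lower"

    for opt, du in SPLIT_DU_TOP_FUNCTION.items():
        print(f"Option {opt}: DU keeps {du}; fronthaul demands {fronthaul_pressure(opt)}")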
There are strict timing, frequency, and synchronization requirements for the fronthaul links. In particular, 3GPP has imposed stringent latency requirements on transporting I/Q signals over the fronthaul, which pose certain challenges for system designers [11,14]. CPRI transport for fronthaul requires a low-latency link, and 200 µs is a generally accepted value for the round-trip latency, which limits the length of the fronthaul links to about 20 km. CPRI implementations further require tight frequency and timing synchronization and accurate time-of-day (ToD) clock synchronization: a frequency precision of 16 ppb and a ToD accuracy within 1.5 µs are required for CPRI transport.

The advantage of any functional split mainly depends on the availability of an ideal or non-ideal transport network. In the non-ideal fronthaul transport case, the functional split needs to occur at a higher level in the protocol stack, which reduces the level of centralization that can be achieved through C-RAN. In this case, the synchronization and bandwidth requirements can be relaxed at the expense of some of the 4G/5G RAN features such as massive MIMO, CA, and multipoint joint processing. The following solutions can be used to help overcome these obstacles. White Rabbit technology is a combination of physical layer and PTP timing. White Rabbit introduces the technique of measuring and compensating for asymmetry to mitigate time and phase transfer errors. It provides sub-nanosecond timestamp accuracy and picosecond precision of synchronization for large distributed systems and allows for deterministic and reliable data delivery. To achieve sub-nanosecond synchronization, White Rabbit combines synchronous Ethernet (SyncE) with IEEE 1588v2 PTP: a two-way exchange of the PTP synchronization messages allows precise adjustment of clock phase and offset, while the link delay is known precisely via accurate hardware timestamps and the calculation of delay asymmetry. Alternatively, partial timing support compatible with the ITU-T G.8275.2 standard can be used, where the position of a grandmaster clock is moved closer to the PTP slaves in the RRHs. This is an excellent alternative to full on-path-support White Rabbit for those operators who are unwilling or unable to upgrade their networks for White Rabbit physical layer support. White Rabbit is a multidisciplinary project for development