Since Type II CSI targets MU-MIMO scenarios, transmission is limited to a maximum of two layers per device [14]. In order to reduce the overhead due to transmission of reference signals, 3GPP NR has moved away from the notion of continuously transmitted wideband cell-specific reference signals and has instead defined more flexible and configurable UE-specific reference signals that are transmitted on demand. This transition has resulted in the definition of new UE measurement procedures based on the CSI-RSs. For this purpose, new metrics have been defined that are explained in detail in the following.

Synchronization Signal-Reference Signal Received Power (SS-RSRP) is defined as the linear average over the power contributions (in Watts) of the resource elements that carry the secondary synchronization signals. The measurement time resource(s) for SS-RSRP are confined within the SS/PBCH block measurement time configuration (SMTC) window duration. If SS-RSRP is used for L1-RSRP as configured by the reporting configurations, the restriction of the measurement time resource(s) to the SMTC window duration is not applicable. For SS-RSRP calculation, the DM-RSs for PBCH and, if indicated by higher layers, the CSI-RSs may be used in addition to the secondary synchronization signals. The SS-RSRP using DM-RS for PBCH or CSI-RS is measured by linear averaging over the power contributions of the resource elements that carry the corresponding reference signals, taking into account the power scaling of those reference signals. If SS-RSRP is not used for L1-RSRP, the additional use of CSI-RSs for SS-RSRP determination is not applicable. The SS-RSRP is measured only over the reference signals corresponding to SS/PBCH blocks with the same SS/PBCH block index and the same physical-layer cell identity. If SS-RSRP is not used for L1-RSRP and higher layers indicate certain SS/PBCH blocks for performing SS-RSRP measurements, then SS-RSRP is measured only from the indicated set of SS/PBCH block(s). For frequency range 1, the reference point for the SS-RSRP is the antenna connector of the UE. For frequency range 2, however, the SS-RSRP is measured based on the combined signal from the antenna elements corresponding to a given receiver branch. For frequency ranges 1 and 2, if receiver diversity is used by the UE, the reported SS-RSRP value must not be lower than the corresponding SS-RSRP of any of the individual receiver branches [10].
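As a rough numerical illustration of the linear-averaging definition above (the per-resource-element powers, the number of SSS subcarriers, and the two-branch receiver are made-up values, not taken from the specification), a minimal sketch could look as follows:

```python
import numpy as np

def ss_rsrp_dbm(sss_re_power_watts: np.ndarray) -> float:
    """SS-RSRP as the linear average of the per-RE received power (Watts)
    over the resource elements carrying the SSS, reported in dBm."""
    rsrp_watts = np.mean(sss_re_power_watts)
    return 10.0 * np.log10(rsrp_watts) + 30.0  # W -> dBm

# Example: per-RE powers for a UE with two receiver branches (127 SSS subcarriers).
branch_powers = [np.full(127, 2e-13), np.full(127, 1e-13)]
per_branch = [ss_rsrp_dbm(p) for p in branch_powers]
# With receiver diversity, the reported value must not be lower than the best
# individual branch, so reporting max(per_branch) satisfies that rule.
print(per_branch, max(per_branch))
```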
SS block-based RRM measurement timing configuration (SMTC) is the measurement window periodicity/duration/offset information for UE RRM measurements per carrier frequency. For intra-frequency connected mode measurements, up to two measurement window periodicities can be configured. For idle mode measurements, a single SMTC is configured per carrier frequency. For inter-frequency connected mode measurements, a single SMTC is configured per carrier frequency.

Table 4.13: NR carrier received signal strength indicator measurement symbols [10]. Depending on the configured SS-RSSI-MeasurementSymbolConfig indication, the OFDM symbol indices used for the measurement are {0,1}, {0,1,2,...,11}, {0,1,2,...,5}, or {0,1,2,...,7}.

Secondary Synchronization Signal-Reference Signal Received Quality (SS-RSRQ) is defined as the ratio N_RB × SS-RSRP/NR carrier RSSI, where N_RB is the number of resource blocks in the received signal strength indicator (RSSI) measurement bandwidth of the NR carrier. The numerator and denominator measurements are conducted over the same set of resource blocks. The NR carrier RSSI comprises the linear average of the total received power (in Watts) observed over certain OFDM symbols within the measurement time resource(s), in the measurement bandwidth, over N_RB resource blocks, from all sources, including co-channel serving and non-serving cells, adjacent channel interference, and thermal noise. The measurement time resource(s) for the NR carrier RSSI are confined within the SMTC window duration. If indicated by higher layers, for a half-frame with SS/PBCH blocks, the NR carrier RSSI is measured over the OFDM symbols of the indicated slots, as shown in Table 4.13.
Otherwise, if a measurement gap is not used, the NR carrier RSSI is measured over the OFDM symbols within the SMTC window duration, and if a measurement gap is used, the NR carrier RSSI is measured over the OFDM symbols corresponding to the overlap between the SMTC window duration and the minimum measurement time within the measurement gap [10]. If higher layers indicate certain SS/PBCH blocks for performing SS-RSRQ measurements, then SS-RSRP is measured only over the indicated set of SS/PBCH block(s). For frequency range 1, the reference point for the SS-RSRQ is the antenna connector of the UE. For frequency range 2, the NR carrier RSSI is measured based on the combined signal from the antenna elements corresponding to a given receiver branch, where the combining function for the NR carrier RSSI is the same as the one used for SS-RSRP measurements. For frequency ranges 1 and 2, if receiver diversity is used by the UE, the reported SS-RSRQ value must not be lower than the corresponding SS-RSRQ of any of the individual receiver branches [10].

CSI-Reference Signal Received Power (CSI-RSRP) is defined as the linear average over the power contributions (in Watts) of the resource elements that carry the CSI-RSs configured for RSRP measurements within the measurement frequency region in the predefined CSI-RS occasions. For CSI-RSRP calculation, the CSI-RSs transmitted on antenna port 3000 or antenna ports 3000 and 3001 are used. For frequency range 1, the reference point for the CSI-RSRP is the antenna connector of the UE. For frequency range 2, however, the CSI-RSRP is measured based on the combined signal from the antenna elements corresponding to a given receiver branch. For frequency ranges 1 and 2, if receive diversity is used by the UE, the reported CSI-RSRP value must not be lower than the corresponding CSI-RSRP of any of the individual receiver branches [10].

CSI-Reference Signal Received Quality (CSI-RSRQ) is defined as the ratio N_RB × CSI-RSRP/CSI-RSSI, where N_RB denotes the number of resource blocks in the CSI-RSSI measurement bandwidth. The numerator and denominator measurements are conducted over the same set of resource blocks.
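The ratio definitions of SS-RSRQ and CSI-RSRQ amount to a simple computation once the RSRP and the carrier RSSI are available over the same resource blocks; the bandwidth of 52 resource blocks and the power levels below are arbitrary example values:

```python
import numpy as np

def rsrq_db(n_rb: int, rsrp_watts: float, rssi_watts: float) -> float:
    """RSRQ = N_RB * RSRP / carrier RSSI, in dB; RSRP and RSSI must be
    measured over the same set of N_RB resource blocks."""
    return 10.0 * np.log10(n_rb * rsrp_watts / rssi_watts)

# Example: 52 resource blocks, per-RE RSRP of 2e-13 W, total carrier RSSI of 5e-10 W.
print(rsrq_db(52, 2e-13, 5e-10))  # about -16.8 dB
```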
CSI-Received Signal Strength Indicator (CSI-RSSI) is the linear average of the total received power (in Watts) observed over the OFDM symbols within the measurement time resource(s), in the measurement bandwidth, over N_RB resource blocks, from all sources, including co-channel serving and non-serving cells, adjacent channel interference, and thermal noise. The measurement time resource(s) for CSI-RSSI correspond to the OFDM symbols containing the configured CSI-RS occasions. For CSI-RSRQ calculation, the CSI-RSs transmitted on antenna port 3000 are used. For frequency range 1, the reference point for the CSI-RSRQ is the antenna connector of the UE, whereas for frequency range 2, the CSI-RSSI is measured based on the combined signal from the antenna elements corresponding to a given receiver branch, where the combining for CSI-RSSI is the same as the one used for CSI-RSRP measurements. For frequency ranges 1 and 2, if receive diversity is used by the UE, the reported CSI-RSRQ value must not be lower than the corresponding CSI-RSRQ of any of the individual receiver branches [10].

Synchronization Signal-Signal-to-Interference-Plus-Noise Ratio (SS-SINR) is defined as the linear average over the power contributions (in Watts) of the resource elements carrying the secondary synchronization signals divided by the linear average of the noise and interference power contributions (in Watts) over the resource elements carrying the secondary synchronization signals within the same frequency region. The measurement time resource(s) for SS-SINR are confined within the SMTC window duration. For SS-SINR calculation, the DM-RSs associated with PBCH may be used in addition to the secondary synchronization signals. If RRC signaling identifies certain SS/PBCH blocks for conducting SS-SINR measurements, then SS-SINR is measured only over the set of SS/PBCH block(s) identified via signaling. For frequency range 1, the reference point for the SS-SINR is the antenna connector of the UE, whereas for frequency range 2, the SS-SINR is measured based on the combined signal from the antenna elements corresponding to a given receiver branch. For frequency ranges 1 and 2, if receiver diversity is used by the UE, the reported SS-SINR value must not be lower than the corresponding SS-SINR of any of the individual receiver branches [10].
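Similarly, the SINR-type quantities are ratios of linear averages taken over the same resource elements; a minimal sketch with made-up per-RE powers:

```python
import numpy as np

def ss_sinr_db(signal_re_watts, noise_plus_interf_re_watts) -> float:
    """SS-SINR: linear average of the signal power on the SSS resource elements
    divided by the linear average of the noise-plus-interference power on the
    same resource elements, expressed in dB."""
    s = np.mean(signal_re_watts)
    ni = np.mean(noise_plus_interf_re_watts)
    return 10.0 * np.log10(s / ni)

print(ss_sinr_db(np.full(127, 2e-13), np.full(127, 4e-14)))  # about 7 dB
```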
CSI-SINR is defined as the linear average over the power contributions (in Watts) of the resource elements carrying the CSI-RSs divided by the linear average of the noise-plus-interference power contributions (in Watts) over the resource elements carrying the CSI-RSs within the same frequency region. For CSI-SINR calculation, the CSI-RSs transmitted on antenna port 3000 are used. For frequency range 1, the reference point for the CSI-SINR is the antenna connector of the UE, whereas for frequency range 2, the CSI-SINR is measured based on the combined signal from the antenna elements corresponding to a given receiver branch. For frequency ranges 1 and 2, if receive diversity is used by the UE, the reported CSI-SINR value must not be lower than the corresponding CSI-SINR of any of the individual receiver branches [10].

SRS-RSRP is defined as the linear average of the power contributions (in Watts) of the resource elements carrying the uplink SRS. The SRS-RSRP is measured over the configured resource elements within the measurement frequency region and over the time resources in the predefined measurement occasions. For frequency range 1, the reference point for the SRS-RSRP is the antenna connector of the UE. If receive diversity is used by the UE, the reported SRS-RSRP value must not be lower than the corresponding SRS-RSRP of any of the individual receiver branches [10].

The NR supports operating frequencies in a wide range from 450 MHz to 52.6 GHz (and higher in future releases). The main challenge in NR is to overcome the higher propagation loss and sensitivity to blockage in the above-6 GHz frequency bands.
To overcome this issue, efficient usage of highly directional beamformed transmission and reception using a large number of antenna elements is crucial at both the gNB and the UE. To achieve large beamforming gain with reasonable implementation complexity, hybrid beamforming was found to be a suitable solution. The analog beams on each panel/subarray are adapted through phase shifters and/or amplitude scaling, whereas digital beamforming is adapted by applying different digital precoders across panels/subarrays. At the gNB, downlink transmission with analog beamforming, that is, a transmitter beam pointing toward a certain direction, can only cover a limited area due to its relatively narrow beamwidth. Therefore, the gNB needs to utilize multiple transmit beams to cover the entire cell. Similarly, in the uplink, the gNB needs to utilize multiple receive beams to receive the uplink transmissions from the entire cell.

4.1.6.2 Beam Management

The new radio provides a set of mechanisms by which the UEs and the gNB can establish highly directional transmission links, typically using large-scale phased arrays, to benefit from the resulting beamforming gain and to sustain an acceptable communication quality. Directional links, nevertheless, require fine alignment of the transmitter and receiver beams, which can only be achieved through a set of procedures known as beam management. The beam management procedures are essential to perform a variety of radio access network functions, including initial access for idle users, which allows a mobile UE to establish a physical connection with a gNB, and beam tracking for connected users, which enables beam adaptation schemes, handover, path selection, and radio link failure (RLF) recovery procedures. In LTE, these control procedures are performed using omnidirectional transmission, and beamforming or other directional transmissions can only be performed after a physical link (user-plane) is established.
In certain conditions, such as operation in the mmWave bands, it may be necessary to exploit high antenna gains even during initial access and, in general, for improving the control channel coverage. However, directionality can significantly delay the access procedures and make the performance more sensitive to beam alignment. The Rel-15 NR is designed to support analog beamforming in addition to digital precoding/beamforming. At high frequencies, analog beamforming may be beneficial from an implementation viewpoint, despite the fact that it constrains the transmit/receive beam to be formed in one direction at a given time and further requires beam sweeping, where the same signal is repeated in multiple OFDM symbols but on different transmit beams. Beam sweeping ensures that the signal can be transmitted with high directionality and still cover the entire cell area. The NR has specified control/signaling schemes to support beam management procedures, including an indication to the device to assist the selection of an appropriate receive beam. For a large number of antennas, the beams are narrow and beam tracking may fail; therefore, beam-recovery procedures have been defined whereby a device can trigger a beam-recovery procedure. A cell may have multiple transmission points, each with multiple beams, and the beam-management procedures allow UE-transparent mobility for seamless handover between the beams of different transmission points. In addition, uplink-centric and reciprocity-based beam management is also possible by utilizing uplink signals [14]. In some cases, a suitable transmit/receive beam pair for the downlink transmission will also be a suitable beam pair for the uplink transmission direction, and vice versa. The NR refers to this as downlink/uplink beam correspondence, where it is sufficient to determine a suitable beam pair in one direction and use the same beam pair in the opposite direction. Since beam management is not intended to track fast-varying and frequency-selective channels, beam correspondence does not require that the downlink and uplink transmissions take place on the same carrier frequency; thus, the concept of beam correspondence is also applicable to FDD systems operating in paired spectrum.
Beam management is defined as the process of acquiring and maintaining a set of beams, which are originated at the gNB and/or the UE and can be used for downlink and uplink transmission and reception. The beam management process comprises the following functions, as shown in Fig. 4.43 [11]:

Beam sweeping: Covering a spatial area with a set of beams transmitted and received according to prespecified intervals and directions. The measurement process is carried out with an exhaustive search, that is, both the UE and the base station have a predefined codebook of directions (each identified by a beamforming vector) that cover the entire angular space and are used sequentially to transmit/receive synchronization and reference signals.

Figure 4.43 Signals and messages exchanged during the downlink beam management procedure (TRP-level beam sweeping over SS bursts, UE selection of the best TRP and TX/RX beam, allocation of RACH resources, RACH preamble transmission, and UE-specific beam selection and beamforming) [69].

Beam measurement: Evaluation of the quality of the received signal at the gNB or at the UE. Different metrics may be used for this purpose, such as the SNR, which is the average of the received power on the synchronization signals divided by the noise power. The measurements for initial access are based on the SS blocks. The tracking is done using the measurements conducted on the SS bursts and the CSI-RSs, which include a set of directions that may cover the entire set of available directions, depending on the UE requirements.

Beam determination: Selection of the suitable beam or beams, either at the gNB or at the UE, according to the measurements obtained via the beam measurement procedure. This process further allows the TRP(s) or the UEs to select their own transmit/receive beam(s).
In beam determination, the gNB and the UE find a beam direction to ensure good radio link quality for the unicast control and data channel transmissions. Once a link is established, the UE measures the link quality of multiple transmit and receive beam pairs and reports the measurement results to the gNB. Furthermore, UE mobility, orientation changes, and channel blockage can change the radio link quality of the transmit and receive beam pairs. When the quality of the current beam pair degrades, the gNB and the UE can switch to another beam pair with better radio link quality. The gNB and the UE can monitor the quality of the current beam pair along with some other beam pairs and perform beam switching when necessary. When the gNB assigns a transmit beam to the UE via downlink control signaling, the beam indication procedure is used.

Beam reporting: The procedure used by the UE to send beam quality and beam decision information to the gNB, where the UE reports its observation of the beamformed signal(s), based on beam measurement, to the gNB. For initial access, after beam determination, the UE must wait for the gNB to schedule the random-access channel (RACH) opportunity corresponding to the best direction that the UE has determined, in order to perform the random access and implicitly inform the selected serving infrastructure of the optimal direction (or set of directions) through which it must steer its beam to be properly aligned with the UE. As mentioned earlier, with each SS block, the gNB specifies one or more RACH opportunities with a certain time and frequency offset and direction, so that the UE knows when to transmit the RACH preamble. This may require an additional complete directional scan of the gNB, thus further increasing the time it takes to access the network. For beam tracking in the connected mode, the UE can provide feedback using the control channel that it has already established, unless there is a link failure and no directions can be recovered using the CSI-RSs.
In this case, the UE must repeat the initial access procedure or try to recover the link using the SS block bursts, while the user experiences a service unavailability.

Beam switching and recovery: Beam recovery involves a procedure used when the link between the gNB and the UE can no longer be maintained and needs to be reestablished.

These procedures are periodically repeated to update the optimal transmitter and receiver beam pair over time. There are two network deployment scenarios for NR, that is, non-standalone and standalone, which can affect the way these procedures are performed. In the non-standalone scenario, an NR gNB uses an LTE cell as an anchor for control-plane management, and mobile terminals exploit multi-connectivity to maintain multiple connections to different cells so that any link failure can be overcome by switching data paths. However, such an option may not be available in standalone deployments. The beam management procedure is further illustrated in Fig. 4.44.

The beam management operation, in general, is based on control messages that are periodically exchanged between the transmit and receive nodes. The reference signals used for beam management depend on the state of the UE. In the idle mode, the PSS, SSS, and PBCH DM-RS are used, whereas in the connected mode, the CSI-RS and SRS are used in the downlink and uplink, respectively. A radio connection between a gNB with NgNB-beam analog beams and a UE with NUE-beam analog beams has a total of NgNB-beam × NUE-beam TX/RX beam pairs. Given that the number of TX/RX beams is typically large in the mmWave bands to achieve sufficient coverage, efficient beam measurement and reporting procedures are important to ensure minimal overhead and UE complexity. A gNB capable of transmitting NgNB-beam analog beams can configure up to NRS reference signals for beam measurement. Each reference signal is beamformed with its associated analog beam pointing in a particular direction. The analog beam associated with each reference signal may also be kept fixed over certain time intervals to allow the UE to test different RX beams for a given TX beam. In this manner, up to NgNB-beam × NUE-beam beam pairs can be measured.
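To make the beam-pair measurement concrete, the following sketch sweeps a small DFT-style beam codebook at both ends over a single-path channel and records one beamformed received-power value per TX/RX beam pair. The array sizes, codebook angles, and channel model are illustrative assumptions, not the NR procedure itself:

```python
import numpy as np

def ula_steering(n_ant: int, angle_rad: float) -> np.ndarray:
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    k = np.arange(n_ant)
    return np.exp(1j * np.pi * k * np.sin(angle_rad)) / np.sqrt(n_ant)

# Analog beam codebooks at the gNB (TX) and the UE (RX): one beam per direction.
tx_angles = np.deg2rad(np.linspace(-60, 60, 8))    # N_gNB-beam = 8
rx_angles = np.deg2rad(np.linspace(-90, 90, 4))    # N_UE-beam  = 4
N_TX, N_RX = 64, 8                                 # antenna elements per panel

# Single-path channel toward 20 deg (departure) / -35 deg (arrival), illustrative only.
h = np.outer(ula_steering(N_RX, np.deg2rad(-35)),
             ula_steering(N_TX, np.deg2rad(20)).conj())

# Sweep all N_gNB-beam x N_UE-beam pairs and record the beamformed power per pair.
rsrp = np.zeros((len(tx_angles), len(rx_angles)))
for i, ta in enumerate(tx_angles):
    f = ula_steering(N_TX, ta)                     # TX analog beam
    for j, ra in enumerate(rx_angles):
        w = ula_steering(N_RX, ra)                 # RX analog beam
        rsrp[i, j] = np.abs(w.conj() @ h @ f) ** 2

best_tx, best_rx = np.unravel_index(np.argmax(rsrp), rsrp.shape)
print(best_tx, best_rx)
```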
Figure 4.44 Illustration of the beam management procedure [26]: Stage 1 (wide-beam) beam alignment is based on measurements on the SS block or wide-beam CSI-RS and supports robust communications (control channels), with UE-autonomous beam selection in the idle mode and network-controlled beam switching in the connected mode; Stage 2 (narrow-beam) refinement of the gNB and UE beams is based on measurements on narrow-beam, UE-specific CSI-RS and supports high-rate communications (data channels), with network-controlled, dynamic beam switching.

The beamformed CSI-RS used for beam management can be transmitted either aperiodically or periodically. When the system is underloaded, beam sweeping across a small number of TX beams over a narrow angular area on an aperiodic basis for a given UE is sufficient. When the system loading increases, it is more efficient to perform a periodic sweep over a wider angular area covering a larger number of UEs. Although the NgNB-beam × NUE-beam beam pairs correspond to NgNB-beam × NUE-beam beam qualities along with their pair indicators, not all NgNB-beam × NUE-beam beam qualities need to be reported, because for each TX beam, only the measurement quality of one beam pair, that is, the optimal RX beam for the given TX beam, needs to be reported. The optimal RX beam is known at the UE based on the beam measurement and is not reported. In the subsequent data or control transmission to the UE, the gNB indicates the index of the selected TX beam to the UE. The UE can then use the latest optimal RX beam for the indicated TX beam, which is stored in the UE local memory upon beam measurement, for the purpose of reception. Moreover, within the NgNB-beam candidate TX beams, the qualities and/or indices of 1 ≤ Nbeam ≤ NgNB-beam TX beams can be reported.
If Nbeam = 1, the UE may report the optimal TX beam index without the associated beam quality as a recommendation to the gNB for downlink beamforming. If Nbeam > 1, the UE can report the indices of the Nbeam selected TX beams, for example, the best Nbeam beams, along with their measured relative/absolute qualities to the gNB. The gNB compares the qualities of the reported TX beams and selects one TX beam for downlink transmission. The NR also supports lower complexity beam management procedures that do not require explicit indication of the selected TX beam to the UE. If the UE only reports the best TX beam index and only a single gNB TX-RX beam pair is used for the transmission of both data and control, there is no need for explicit beam indication. In that case, the UE identifies the optimal RX beam for a given measurement, and the gNB uses the TX beam recommended by the UE for the subsequent data and control transmissions. To receive the data and control transmissions, the UE uses the RX beam that it identified in the previous measurement. The beam quality metrics that can be used for beam measurement are, for example, RSRP, RSRQ, and SINR calculated based on the CSI-RS. Different metrics result in different UE complexity and may be suitable for different scenarios; for example, the RSRP measurement is relatively simple and more power efficient and allows fast measurement of a large number of beams, which is useful for initial beam acquisition, whereas the CSI measurement is more complex but offers more accurate beam information, which can be used for beam refinement within a small group of candidate beams [57]. Following the initial access and connection setup, the device can assume that network transmissions to the device will use the same transmit beam that was used for SS block acquisition. Therefore, the device can assume that the receive beam which was used to acquire the SS block will also be a suitable beam for the reception of subsequent downlink transmissions. Similarly, subsequent uplink transmissions would use the same beam that was used for the random-access preamble transmission, implying that the network can assume that the uplink receive beam established at the initial access will remain valid.
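A minimal sketch of the reporting logic described above, under the assumption that the UE has already measured a per-pair quality matrix (the RSRP values below are randomly generated placeholders):

```python
import numpy as np

def report_best_tx_beams(rsrp: np.ndarray, n_beam: int):
    """Given per-pair beam qualities rsrp[tx, rx], keep only the best RX beam
    per TX beam (known locally at the UE, not reported), then report the
    indices and qualities of the n_beam best TX beams to the gNB."""
    best_rx_per_tx = rsrp.argmax(axis=1)                 # stored in UE memory
    tx_quality = rsrp.max(axis=1)                        # one quality per TX beam
    reported_tx = np.argsort(tx_quality)[::-1][:n_beam]  # best n_beam TX indices
    return reported_tx, tx_quality[reported_tx], best_rx_per_tx

# Example with 8 gNB TX beams and 4 UE RX beams (values in dBm, illustrative).
rng = np.random.default_rng(0)
rsrp = rng.uniform(-110, -70, size=(8, 4))
tx_idx, tx_q, rx_map = report_best_tx_beams(rsrp, n_beam=2)
print(tx_idx, tx_q)   # reported to the gNB
print(rx_map)         # kept at the UE for reception on the indicated TX beam
```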
In LTE, a UE continuously performs radio link monitoring of the channel quality of its serving cell to ensure sufficient coverage of the control channels. If the link quality is considered poor, the UE declares RLF and triggers a higher layer connection reestablishment procedure, resulting in cell reselection. For a large number of antennas, the UE-specific beams are narrow, and beam tracking can fail, for example, when a moving object blocks the LoS path to the UE. This event is regarded as beam failure in NR. In this case, declaring RLF and performing cell reselection is unnecessary, since another beam from the same cell can be used to cover the UE. This physical layer procedure is an example of beam recovery, in which the UE continuously monitors a UE-specific periodic reference signal associated with the TX beam with which a control channel is transmitted. If the measured beam quality deteriorates, the UE declares beam failure and proceeds to identify an alternative candidate TX beam, selected from a set of UE-common TX beams used for the periodic beam sweeping of initial access signals. These beams are typically wider than the UE-specific data beams. As part of this TX beam determination, the UE also determines an appropriate RX beam to receive the indicated TX beam. When a new beam is detected, the UE transmits a beam recovery request message using a preconfigured uplink resource (which includes an identifier of the new beam) to the serving cell. The network then transmits a recovery response to the UE. If the response is successfully received by the UE, the beam recovery procedure is successful, and a new beam pair is established; otherwise, the UE may perform additional beam recovery attempts, which upon failure would force the UE to initiate the RLF procedure, which includes cell reselection [57].
The beamforming weights for data transmission are typically obtained from codebook-based CSI reporting. Once the codebook is specified, the beamforming capability in the azimuth and elevation dimensions is restricted by the codebook size, parameters, and structure. Alternatively, non-codebook-based beamforming can be extended to enable a gNB to reuse the beam management procedures for acquiring the beamforming weights. In particular, a UE is configured with K > 1 CSI-RS resources, each associated with a TX beam. The beamforming can be performed in either the digital or the analog domain; however, at lower carrier frequencies, digital beamforming may be predominantly utilized. The UE measures the configured K CSI-RS resources, selects NCSI ≤ K CSI-RS resources based on their respective qualities, and reports the NCSI CRIs. The actual CSI (e.g., CQI/PMI/RI) measured from the selected CSI-RS resources is also reported together with the CRIs. As such, beam reporting and CSI reporting can be combined to acquire CSI. When a certain degree of channel reciprocity is achievable, the beamforming weights can be derived at the gNB by measuring the uplink signal(s) transmitted from the UE. The CSI reporting from the UE can also be utilized by the gNB to determine the beamforming weights [57]. To enable analog beamforming at the UE receiver, different reference signals within a resource set should be transmitted in different symbols, allowing the receiver-side beam to sweep over the set of reference signals. At the same time, the device can assume that different reference signals in the resource set are transmitted using the same spatial filter or, alternatively, the same transmit beam. In general, a configured resource set includes a repetition flag that indicates whether a device can assume that all reference signals within the resource set are transmitted using the same spatial filter. For a resource set to be used for downlink receiver-side beam adjustment, the repetition flag should be set [9].
4.1.7 Channel Coding and Modulation Schemes

Channel coding is one of the areas where NR takes a completely different approach from LTE. In NR, LDPC coding has replaced the turbo coding that was previously used for LTE PDSCH/PUSCH coding, and polar codes have replaced the tail-biting convolutional codes used previously for LTE PDCCH/PUCCH/PBCH coding, except for very small block lengths where repetition/block coding may be used. Turbo codes generally have a low encoding complexity and a high decoding complexity, whereas LDPC codes have more complex encoding and less complex decoding algorithms. Considering eMBB use cases with large code block sizes and code rates up to 8/9, turbo codes may not meet the implementation complexity required for the decoder. The LDPC codes, on the other hand, have relatively simple and practical decoding algorithms. The decoding is performed by iterative belief propagation (BP). The accuracy of decoding improves with each iteration, and the number of iterations is decided based on the requirements of the application, providing a trade-off among bit error performance, latency, and complexity. In terms of latency, LDPC codes are parallel in nature, while turbo codes are serial in nature, allowing LDPC codes to better support low-latency applications than turbo codes. Furthermore, the bit error rate (BER) of turbo codes has a higher error floor compared to that of LDPC codes, and the LDPC matrix can be extended to lower rates than the LTE turbo codes, achieving higher coding gains for low-rate applications targeting high reliability. As for polar codes, they were introduced in 2009, and they are among the capacity-achieving codes with low encoding and decoding complexity. They provide full flexibility and very good performance for any code length and code rate, without an error floor, that is, they do not suffer a decrease in the slope of the BLER versus SNR curve.

4.1.7.1 Principles of Polar Coding

In order to describe the polar encoding/decoding concept, let us review a few prerequisites.
The symmetric capacity is defined as the highest possible rate that can be achieved when all of the input symbols to the channel are equiprobable. The mutual information of a binary-input discrete memoryless channel W (the symmetric capacity) with input alphabet X = {0,1} and output alphabet Y is defined as follows [29]:

I(W) = \sum_{y \in Y} \sum_{x \in X} \tfrac{1}{2} W(y|x) \log_2 \frac{2 W(y|x)}{W(y|0) + W(y|1)}

The symmetric capacity is equal to the Shannon capacity when the channel W is a symmetric channel. The Bhattacharyya parameter Z(W) is an upper bound on the probability of a maximum likelihood decision error when transmitting 0 or 1 over the channel W; thus, the Bhattacharyya parameter Z(W) is a channel reliability measure. The Bhattacharyya parameter can be calculated as follows:

Z(W) = \sum_{y \in Y} \sqrt{W(y|0) W(y|1)}

The relationship between I(W) and Z(W) for any binary-input, discrete, memoryless channel W can be described as follows:

\log_2 \frac{2}{1 + Z(W)} \le I(W) \le \sqrt{1 - Z(W)^2}

which means that I(W) = 1 or I(W) = 0 if and only if Z(W) = 0 or Z(W) = 1, respectively.

Polar codes are the first type of forward error correction codes achieving the symmetric capacity of an arbitrary binary-input discrete memoryless channel under low-complexity encoding and low-complexity successive cancelation (SC) decoding of order O(N log N), for infinite code length. Polar codes are founded on several concepts including channel polarization, code construction, polar encoding, which is a special case of the normal encoding process (i.e., more structural), and the corresponding decoding concept [29].

Channel polarization is the first phase of polar coding, where N distinct channels are synthesized such that each of these channels is either completely noisy or completely noiseless, a property that is strictly valid only for infinite code length N. The measure of how noisy a channel is, in the context of polar codes, was first determined by the symmetric capacity or the Bhattacharyya parameter of the channel; however, the BER was later used as a common measure.
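As a quick numerical check of these definitions, the following sketch evaluates I(W) and Z(W) for a binary symmetric channel with crossover probability p (the channel choice and probabilities are only illustrative):

```python
import numpy as np

def symmetric_capacity_and_z(p: float):
    """I(W) and Z(W) for a binary symmetric channel with crossover probability p.
    W(y|x) is represented as a 2x2 matrix indexed as w[x, y]."""
    w = np.array([[1 - p, p],
                  [p, 1 - p]])
    i_w = 0.0
    for y in range(2):
        for x in range(2):
            i_w += 0.5 * w[x, y] * np.log2(2 * w[x, y] / (w[0, y] + w[1, y]))
    z_w = sum(np.sqrt(w[0, y] * w[1, y]) for y in range(2))
    return i_w, z_w

print(symmetric_capacity_and_z(0.05))  # I ~= 0.71, Z ~= 0.44
print(symmetric_capacity_and_z(0.11))  # I ~= 0.50, Z ~= 0.63
print(symmetric_capacity_and_z(0.50))  # useless channel: I = 0.0, Z = 1.0
```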
The code construction phase involves selecting the channels over which the information bits are transmitted. In other words, constructing a polar code means determining the vector of bit-channel indices that will be used to transmit information; the remaining bit-channels carry no data and contain the frozen bits. Several code construction algorithms exist that vary in complexity, precision, and BER performance, including the evolution of Z-parameters-based code construction algorithm, the Monte Carlo simulation-based construction algorithm, the density evolution-based code construction algorithm, the Gaussian approximation-based algorithm, and the transition probability matrix-based algorithm [29].

Polar codes are members of the coset linear block code family (see Footnote 10), where the information bits are multiplied by a submatrix of the conventional polar generator matrix, and the frozen bits are multiplied by another submatrix. Polar encoding is characterized by its structural nature, in the sense that all parameters are static and independent of the code rate; different code rates correspond to different numbers of information bits, while the same generator matrix is used. Systematic polar encoding is an extended version of non-systematic polar encoding, where the codeword is first non-systematically encoded, the bits at the frozen bit-channel positions are reset to the values of the frozen bits, and the result is then non-systematically encoded again. Systematic encoding provides better performance in terms of BER than non-systematic encoding; however, both have the same BLER performance.

Footnote 10: For a subgroup H of a group G and an element x of G, define xH to be the set {xh : h ∈ H} and Hx to be the set {hx : h ∈ H}. A subset of G of the form xH for some x ∈ G is said to be a left coset of H, and a subset of the form Hx is said to be a right coset of H. For any subgroup H, we can define an equivalence relation by x ~ y if x = yh for some h ∈ H. The equivalence classes of this relation are exactly the left cosets of H, and an element x of G is in the equivalence class xH. Thus the left cosets of H form a partition of G. It is also true that any two left cosets of H have the same cardinal number and, in particular, every coset of H has the same cardinal number as eH = H, where e is the identity element; thus the cardinal number of any left coset of H equals the order of H. The same results are true of the right cosets of G and, in fact, one can prove that the set of left cosets of H has the same cardinal number as the set of right cosets of H.

There are various polar decoding algorithms, including SC, SC list (SCL), SCL with CRC (SCL-CRC), and BP decoding. The SC decoder is based on the concept of successively decoding bits, where each stage of bit decoding is based on the previously decoded bits. It suffers from inter-bit dependence due to its successive nature and thus from error propagation. As a standalone decoder for polar codes, it is outperformed by most polar decoders in terms of BER performance. However, it enjoys a potential for list decoding because of its sequential hierarchical structure.
It was proved that polar codes achieve the Shannon capacity of any symmetric binary-input discrete memoryless channel under SC decoding [33,34]. The SCL decoder was proposed as an extended version of the SC decoder where, instead of successively computing hard decisions for each bit, it branches one SC decoder into two parallel SC decoders at each decision stage, where each branch has its own path metric that is continuously updated for each path. It can be shown that a list of size 32 is enough to almost achieve the ML bound. The SCL with CRC (SCL-CRC) decoder is an extension of the SCL decoder, where a high-rate CRC code is appended to the polar code, so that the correct codeword is selected among the candidate codewords from the final list of paths. It was observed that, whenever an SCL decoder fails, the correct codeword exists in the list; therefore, the CRC was proposed as a validity check for each candidate codeword in the list. In the BP decoder, unlike the SC-based decoding techniques, there is no inter-bit dependence and thus no error propagation, and no intermediate hard decisions are made.
The BP decoder updates the LLR values iteratively through right-to-left and left-to-right iterations, using the same update functions that are used in the LDPC domain. For finite-length codes, the BP decoder outperforms the SC decoder in terms of BER performance.

Channel polarization is the concept upon which the polar codes are built. It is the process through which N distinct bit-channels W_N^{(i)}, 1 ≤ i ≤ N, are generated from N independent copies of a binary-input discrete memoryless channel W. The N generated channels are polarized and have mutual information either close to 0 (i.e., noisy channels) or close to 1 (i.e., noiseless channels). The synthesized channels become perfectly noisy/noiseless as N approaches infinity. The process of channel polarization consists of two phases, namely channel combining and channel splitting. In the former phase, N distinct channels are created in n = log2 N steps by recursively combining N copies of a binary-input discrete memoryless channel to form a vector channel W_N : X^N → Y^N, where N must be an integer power of two. In the second phase, the vector channel W_N is split into N binary-input channels W_N^{(i)} : X → Y^N × X^{i−1}, 1 ≤ i ≤ N. Uncoded information bits are transmitted over the reliable or noiseless channels with rate 1, and frozen bits are transmitted over the unreliable or noisy channels [33,34].

Polar code construction is the process of selecting the set of K good channels, out of the N channels, over which the uncoded information bits will be transmitted. The selection of the information set A is done in a channel-dependent manner. For finite-length polar codes, the synthesized channels are not fully polarized, and bit errors over the quasi-polarized channels are inevitable. Thus, the polar code construction phase is critical to obtain the best possible performance. To construct a polar code, the K most reliable channels are chosen so as to minimize the sum of their Bhattacharyya parameter values \sum_{i \in A} Z(W_N^{(i)}), in order to minimize the upper bound on the block error probability of the constructed polar code. For the binary erasure channel, the Bhattacharyya parameters can be calculated using recursive formulas; thus, the polar code construction problem can be solved without a need for approximation. The Bhattacharyya parameters can be calculated using the following recursive formulas with a complexity of O(N) [29]:

Z(W_{2N}^{(2i-1)}) = 2 Z(W_N^{(i)}) - Z(W_N^{(i)})^2, \quad Z(W_{2N}^{(2i)}) = Z(W_N^{(i)})^2, \quad 1 \le i \le N

with Z(W_1^{(1)}) = \epsilon for a binary erasure channel with erasure probability \epsilon.
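For the binary erasure channel, the recursion above can be coded directly; the following sketch constructs the information set of a toy (N = 8, K = 4) polar code over BEC(0.5), with parameters chosen only for illustration:

```python
def bec_bhattacharyya(n: int, eps: float):
    """Bhattacharyya parameters of the N synthesized bit-channels over a BEC with
    erasure probability eps, via Z(W_2N^(2i-1)) = 2Z - Z^2 and Z(W_2N^(2i)) = Z^2."""
    z = [eps]
    while len(z) < n:
        nxt = []
        for zi in z:
            nxt.append(2 * zi - zi * zi)  # upper (combined) channel: less reliable
            nxt.append(zi * zi)           # lower (split) channel: more reliable
        z = nxt
    return z

def construct_information_set(n: int, k: int, eps: float):
    """Pick the K bit-channels with the smallest Z (most reliable) as the
    information set A; the remaining N - K indices are frozen."""
    z = bec_bhattacharyya(n, eps)
    order = sorted(range(n), key=lambda i: z[i])
    return sorted(order[:k]), sorted(order[k:])

info, frozen = construct_information_set(n=8, k=4, eps=0.5)
print(info, frozen)   # for BEC(0.5), N=8, K=4 the information set is {3, 5, 6, 7}
```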
For the AWGN channel, no efficient exact algorithm for calculating the Bhattacharyya parameter per synthesized channel is known. Approximating the exact polar code construction is possible to reduce complexity, by calculating an estimate of the Bhattacharyya parameter per synthesized channel. Several suboptimal construction methods with different computational complexities have been proposed in the literature. The main difference between polar codes and Reed-Muller codes is the choice of the information set A; in the case of Reed-Muller codes, the indices of the highest weight rows of the generator matrix G are selected to carry the information. A polar code of length N = 2^n is generated using a generator matrix G of size N × N. A block of length N, consisting of N − K frozen bits and K information bits, is multiplied by G to produce the polar codeword x = uG. The generator matrix G is based on a kernel that is used to construct the code:

G_N = (G_2)^{\otimes n}, \quad G_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}

where \otimes denotes the Kronecker product (see Footnote 11). A polar encoding lattice, equivalent to G, can also be used as a polar encoder, as shown in Fig. 4.45. Note that successive graph representations have recursive relationships. More specifically, the graph representation of a polar encoding kernel operation with a kernel block size of N = 2 comprises a single stage containing a single XOR: the first of the two kernel encoded bits is obtained as the XOR of the two kernel information bits, while the second kernel encoded bit is equal to the second kernel information bit. For larger kernel block sizes N, the graph representation may be considered to be a vertical concatenation of two graph representations for a kernel block size of N/2, followed by an additional stage of XORs, as shown in Fig. 4.45.

Footnote 11: Given an m × n matrix A and a p × q matrix B, the Kronecker product C = A ⊗ B, also called the matrix direct product, is an mp × nq matrix with elements defined by c_{αβ} = a_{ij} b_{kl}, where α = p(i − 1) + k and β = q(j − 1) + l. The matrix direct product provides the matrix of the linear transformation induced by the vector space tensor product of the original vector spaces. Assuming that the operators S: V1 → W1 and T: V2 → W2 are given by S(x) = Ax and T(y) = By, then S ⊗ T: V1 ⊗ V2 → W1 ⊗ W2 is determined by (S ⊗ T)(x ⊗ y) = (Ax) ⊗ (By).
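A minimal sketch of the generator-matrix view of polar encoding, building G_N as a Kronecker power of the 2 × 2 kernel and encoding over GF(2); the N = 8 information set reuses the toy BEC construction above:

```python
import numpy as np

def polar_generator(n: int) -> np.ndarray:
    """G_N as the n-th Kronecker power of the kernel G_2 = [[1,0],[1,1]]."""
    g2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    g = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        g = np.kron(g, g2)
    return g

def polar_encode(u: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Non-systematic polar encoding x = uG over GF(2)."""
    return (u @ g) % 2

# N = 8 example: frozen positions carry zeros, information bits go to the
# reliable positions {3, 5, 6, 7} found in the BEC construction sketch above.
g8 = polar_generator(3)
u = np.zeros(8, dtype=np.uint8)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]   # information bits
print(polar_encode(u, g8))
```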
In analogy with the N = 2 kernel described above, the first N/2 of the N kernel encoded bits are obtained as XORs of corresponding bits from the outputs of the two N/2 kernels, while the second N/2 of the kernel encoded bits are equal to the output of the second N/2 kernel [35].

Figure 4.45 Polar encoders of different sizes (size-2, size-4, and size-8) [32].

Figure 4.46 High-level polar code encoding and SC list decoding [40].

Polar codes were introduced as non-systematic codes; however, any linear code can be transformed from a non-systematic to a systematic code. The systematic polar encoding can be performed using the standard non-systematic polar encoding apparatus in a three-phase operation as follows (see Fig. 4.46):
1. The vector u = (u_A, u_{A^c}) is encoded in the standard non-systematic fashion, producing the vector u'.
2. The frozen bit positions A^c in the vector u' are set to zero, u'_{A^c} = 0 (the frozen bits are always set to zero here).
3. The modified vector u' is encoded in the standard non-systematic fashion, producing the codeword x, which is a systematic polar codeword in the sense that the information bits u_A appear in the final codeword x at the information bit positions A.
The main advantage of systematic polar codes is that their BER performance is better than that of non-systematic polar codes; however, both systematic and non-systematic polar codes have the same BLER performance.
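The three-phase systematic encoding described above can be sketched by reusing the same non-systematic encoder twice; the N = 8 information set is the toy example used earlier, and the final print shows that the information bits reappear at the positions in A:

```python
import numpy as np

def polar_generator(n: int) -> np.ndarray:
    """G_N as the n-th Kronecker power of the kernel G_2 = [[1,0],[1,1]]."""
    g2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    g = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        g = np.kron(g, g2)
    return g

def systematic_polar_encode(info_bits, info_set, n_code):
    """Three-phase systematic encoding: encode, zero the frozen positions,
    encode again; the information bits then appear at the positions in A."""
    g = polar_generator(int(np.log2(n_code)))
    u = np.zeros(n_code, dtype=np.uint8)
    u[info_set] = info_bits
    v = (u @ g) % 2                       # phase 1: non-systematic encoding
    frozen = np.setdiff1d(np.arange(n_code), info_set)
    v[frozen] = 0                         # phase 2: reset frozen positions to zero
    return (v @ g) % 2                    # phase 3: encode again

A = [3, 5, 6, 7]                          # information set from the BEC example
bits = np.array([1, 0, 1, 1], dtype=np.uint8)
x = systematic_polar_encode(bits, A, n_code=8)
print(x, x[A])                            # x[A] reproduces the information bits
```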
It is observed that systematic polar coding is more robust to error propagation under SC decoding than non-systematic polar coding. There are two main families of polar decoding methods, namely the SC decoder and its variants, and the BP decoder. The SC decoder and its variants have serial decoding characteristics that cannot be parallelized; thus, these decoders suffer from long decoding latency and low throughput, making them less suitable for high-speed applications. The BP decoder can use parallel processing and can thus achieve higher throughput, making it suitable for high-speed applications.

In the receiver, the role of the demodulator is to recover information pertaining to the encoded block. However, the demodulator is typically unable to obtain absolute confidence about the value of the bits in the encoded block, due to the random nature of the noise in the communication channel. The demodulator may express its confidence about the values of the bits in the encoded block by generating a soft encoded block, which comprises N encoded soft bits. Each soft bit may be represented in the form of an LLR as follows:

LLR = log[p(bit = 0)] − log[p(bit = 1)]

where p(bit = 0) and p(bit = 1) are the probabilities that the corresponding bit has the value "0" or "1", respectively. A positive LLR indicates that the demodulator has greater confidence that the corresponding bit has the value "0", while a negative LLR indicates greater confidence in the bit value "1". The magnitude of the LLR corresponds to the confidence level, where an infinite value corresponds to absolute confidence, while a magnitude of zero indicates that the demodulator has no information about the bit value [29].
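As an illustration of the LLR convention above, the following sketch computes per-bit LLRs for BPSK transmission over an AWGN channel, assuming the mapping bit 0 → +1, bit 1 → −1 and equiprobable bits (the block content and noise variance are arbitrary example values):

```python
import numpy as np

def bpsk_awgn_llr(y: np.ndarray, noise_var: float) -> np.ndarray:
    """LLR = log p(bit=0|y) - log p(bit=1|y) for BPSK over AWGN with the
    mapping 0 -> +1, 1 -> -1 and equiprobable bits: LLR = 2*y/sigma^2."""
    return 2.0 * y / noise_var

x = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=np.uint8)   # arbitrary 8-bit block
symbols = 1.0 - 2.0 * x.astype(float)                    # 0 -> +1, 1 -> -1
noise_var = 0.5
rng = np.random.default_rng(1)
y = symbols + rng.normal(scale=np.sqrt(noise_var), size=symbols.size)
llr = bpsk_awgn_llr(y, noise_var)
hard = (llr < 0).astype(np.uint8)   # the sign of the LLR gives the hard decision
print(llr.round(2), hard)
```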
The polar decoder may operate based on different algorithms, including SC decoding and SCL decoding. The SC decoder was the first decoder used for polar codes; it consists of N decision elements for the N bits of u (see Fig. 4.46). Each decision element computes a hard-decision output based on the observed channel output y_1^N and the previously decoded bits; for example, the kth decision element computes \hat{u}_k using y_1^N and \hat{u}_1^{k-1}. The decision element computes the likelihood ratio as follows [29]:

L_N^{(k)}(y_1^N, \hat{u}_1^{k-1}) = \frac{W_N^{(k)}(y_1^N, \hat{u}_1^{k-1} \mid u_k = 0)}{W_N^{(k)}(y_1^N, \hat{u}_1^{k-1} \mid u_k = 1)}

The hard decision per decision element is generated according to the following rule:

\hat{u}_k = 0 \ \text{if} \ L_N^{(k)}(y_1^N, \hat{u}_1^{k-1}) \ge 1, \quad \hat{u}_k = 1 \ \text{otherwise}, \quad k \in A

The decision elements with indices that belong to the set A^c are set to zero, \hat{u}_k = 0; that is, the frozen bit positions and values can be considered as the decoder's prior knowledge.

In the SC decoding process, the value selected for each bit in the recovered information block depends on the sign of the corresponding LLR, which in turn depends on the values selected for all previous recovered information bits. If this approach results in the selection of an incorrect value for a particular bit, then this will often result in the propagation of errors to all subsequent bits. The selection of an incorrect value for an information bit may be detected by considering the subsequent frozen bits, since the decoder knows that these bits should have zero values. More specifically, if the corresponding LLR has a sign that would imply a value of "1" for a frozen bit, then this suggests that an error may have occurred during the decoding of one of the preceding information bits. However, in the SC decoding process, there is no opportunity to reconsider alternative values for the preceding information bits; once a value has been selected for an information bit, the SC decoding decision is final. This motivates SCL decoding, which enables a list of alternative values for the information bits to be considered. As the decoding process continues, it considers both options for the value of each successive information bit. More specifically, an SCL decoder maintains a list of candidate kernel information blocks, where the list and the kernel information blocks are built up as the SCL decoding process proceeds. At the start of the process, the list comprises only a single kernel information block having a length of zero bits.
Whenever the decoding process reaches a frozen bit, a bit value of "0" is appended to the end of each candidate kernel information block in the list. However, whenever the decoding process reaches an information bit, two replicas of the list of candidate kernel information blocks are created. The bit value "0" is appended to each block in the first replica, and the bit value "1" is appended to each block in the second replica. Following this, the two lists are merged to form a new list having a length that is double the length of the original list. This continues until the length of the list reaches a limit L, that is, the list size, which is typically a power of two. From this point onwards, each time the length of the list is doubled when considering an information bit, the worst L among the 2L candidate kernel information blocks are identified and pruned from the list. Thus, the length of the list is maintained at L until the SCL decoding process is completed. In this process, the worst candidate kernel information blocks are identified by comparing and sorting appropriate metrics that are computed for each block, based on the LLRs obtained on the left-hand edge of the polar code graph [35,38].
There are several challenges associated with the hardware implementation of polar encoders and, in particular, polar decoders. For example, the complexity of a polar decoder is much greater than that of a polar encoder for three reasons: (1) while polar encoders operate on the basis of bits, polar decoders operate on the basis of the probabilities of bits, which require more memory to store and more complex computations; (2) while polar encoders only have to consider the particular permutation of the information block with which they are presented, polar decoders must consider all possible permutations of the information block and must select the one which is the most likely; and (3) while polar encoders only process each information block once, an SCL polar decoder must process each information block L times in order to achieve sufficiently strong error correction. For these reasons, the latency, hardware resource usage, and power consumption of polar decoders are typically much greater than those of polar encoders. Another challenge in the implementation of the SCL decoding process is imposed by metric sorting. As described earlier, sorting is required in order to identify and prune the worst L candidate kernel information blocks among the merged list of 2L candidates. One option is to employ a large amount of hardware to simultaneously compare each of the 2L candidates with every other candidate, so that the sorting can be completed within a short time. Alternatively, the hardware resource requirement can be reduced by structuring successive comparisons to efficiently reuse intermediate results, at the cost of increasing the latency required to rank the 2L candidates. The CRC bits are employed by the NR polar code in order to facilitate error detection and to improve the error correction capability of the polar decoder. However, there is a trade-off between the error detection capability and the error correction capability. In order to meet the BLER requirements of NR for the control channels, the CRC bits must be handled very carefully, in a manner which is not captured in the NR specifications. In particular, the CRC (and parity check or PC) bits must be decoded as an integral part of the polar decoding process, using an unconventional decoding technique [35].

4.1.7.2 NR Polar Coding

3GPP NR uses a variant of the polar code called the distributed CRC (D-CRC) polar code, that is, a combination of CRC-assisted and PC polar codes, which interleaves a CRC-concatenated block and relocates some of the PC bits into the middle positions of this block prior to performing the conventional polar encoding described earlier. This allows a decoder to terminate the decoding process early, as soon as any parity check is not successful.
The D-CRC scheme is important for early termination of the decoding process, because the post-CRC interleaver can distribute the information and CRC bits such that partial CRC checks can be performed during list decoding, and paths failing a partial CRC check can be pruned, leading to early termination of decoding. The post-CRC interleaver design is closely tied to the CRC generator polynomial; thus, by appropriately selecting the CRC polynomial, one can achieve better early-termination gains and maintain an acceptable false alarm rate. The signal flow graph of D-CRC polar encoding and decoding is shown in Fig. 4.47.

In NR, the polar code is used to encode the broadcast channel as well as the DCI and uplink control information (UCI). Let us denote by Nc the number of control bits that must be transmitted using a code of length E bits. We add LCRC CRC bits to the information bits, resulting in K bits that will be encoded by an (N, K) polar encoder with N = 2^n. Rate matching is performed to obtain a code of length E and effective rate R = Nc/E. To each vector (a0, a1, ..., aNc−1) containing the Nc control information bits to be transmitted, an LCRC-bit CRC is attached. The resulting vector c = (c0, c1, ..., cK−1), comprising K = Nc + LCRC bits, is passed through an interleaver.

Figure 4.47 3GPP NR polar encoding flow graph (CRC calculation, bit interleaver, subchannel allocation and PC bit calculation, polar encoder, subblock interleaver, rate matching via circular buffer, and channel interleaver) [36].

Based on the desired code rate R and codeword length E, a polar code of length N is utilized along with the relative bit-channel reliability sequence and the frozen set. The interleaved vector c' is assigned to the information set along with the PC bits, while the remaining bits in the N-bit vector u are frozen. Vector u is encoded as d = uG, where the generator matrix G was defined earlier. After encoding, a subblock interleaver divides d into 32 equal-length blocks, scrambling them and creating the vector y that is fed into the circular buffer, as illustrated in Fig. 4.47. For rate matching, puncturing, shortening, or repetition is applied to change the N-bit vector y into the E-bit vector e. A channel interleaver is finally applied to compute the vector f that is ready to be modulated and transmitted [36].
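The selection among repetition, puncturing, and shortening described above can be sketched as follows; this is a simplified illustration of the selection rule only and omits the subblock interleaving and the exact bit-selection order of the specification:

```python
import numpy as np

def rate_match(y: np.ndarray, e_len: int, k: int) -> np.ndarray:
    """Simplified NR-style rate matching of an N-bit encoded block y to E bits:
    repetition when E >= N, puncturing of the first N-E bits when the rate K/E
    is low, and shortening of the last N-E bits otherwise."""
    n = y.size
    if e_len >= n:                       # repetition: wrap around the circular buffer
        reps = int(np.ceil(e_len / n))
        return np.tile(y, reps)[:e_len]
    if k / e_len <= 7 / 16:              # low rate: puncture the first N-E bits
        return y[n - e_len:]
    return y[:e_len]                     # otherwise: shorten the last N-E bits

y = np.arange(16) % 2                    # stand-in for an encoded block of N = 16 bits
print(rate_match(y, e_len=20, k=8))      # repetition
print(rate_match(y, e_len=12, k=4))      # puncturing (K/E = 1/3 <= 7/16)
print(rate_match(y, e_len=12, k=8))      # shortening (K/E = 2/3 > 7/16)
```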
The NR polar encoder relies on several parameters that depend on the amount and type of information to be transmitted and on the physical channel used. The first parameter that needs to be identified is the code length of the polar code, N = 2^n. The number n is calculated as n = max{min{n1, n2, nmax}, nmin}, where nmin and nmax provide a lower and an upper bound on the code length, respectively. In particular, nmin = 5, and nmax = 9 for the downlink control channels, whereas nmax = 10 for the uplink control channels. The parameter n2 = ⌈log2(K/Rmin)⌉ gives an upper bound on the code length based on the minimum code rate admitted by the encoder, that is, Rmin = 1/8. The value of the parameter n1 depends on the rate-matching scheme. It is usually calculated as n1 = ⌈log2 E⌉, so that 2^{n1} is the smallest power of two larger than E. However, a correction factor is introduced to avoid too severe rate matching: if log2 E − ⌊log2 E⌋ < 0.17, that is, if the smallest power of two larger than E is much larger than E, the parameter is set to n1 = ⌊log2 E⌋. In this case, an additional constraint on the code dimension is added by imposing K/E < 9/16 to ensure that K < N.

Table 4.14: 3GPP NR polar encoding parameters for the uplink (PUCCH/PUSCH) and downlink (PDCCH/PBCH) control channels [7].

If a code length N > E is selected, the polar code will be punctured or shortened, depending on the code rate, before the transmission; in particular, if K/E ≤ 7/16 the code will be punctured, otherwise it will be shortened. If N < E, repetition is used, and some encoded bits will be transmitted twice. In this case, the code construction ensures that K < N. As shown in Table 4.14, a set of parameters is defined to differentiate between the different types of control information. The parameters IIL and IBIL refer to the activation of the input bit interleaver and the channel interleaver, respectively. The numbers of the two types of assistant PC bits are given by nPC and nPC,wm. The length of the control information vector Nc and the length of the transmitted codeword E depend on the type, content, and number of consecutive transmissions and on the decisions taken in the higher layers [36].
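A sketch of the mother-code-length determination described above; the helper follows the n = max{min{n1, n2, nmax}, nmin} rule as reconstructed here, and the example K and E values are arbitrary:

```python
import math

def nr_polar_code_length(k: int, e: int, n_max: int) -> int:
    """Mother code length N = 2^n with n = max(min(n1, n2, n_max), n_min),
    n_min = 5 and R_min = 1/8 (n_max = 9 for downlink, 10 for uplink control)."""
    n_min, r_min = 5, 1.0 / 8.0
    # Correction factor: if E is only slightly above a power of two, do not jump
    # to the next power of two (that would nearly double the code length).
    if (math.log2(e) - math.floor(math.log2(e))) < 0.17 and (k / e < 9.0 / 16.0):
        n1 = math.floor(math.log2(e))
    else:
        n1 = math.ceil(math.log2(e))
    n2 = math.ceil(math.log2(k / r_min))
    n = max(min(n1, n2, n_max), n_min)
    return 2 ** n

print(nr_polar_code_length(k=32, e=100, n_max=9))  # 128 > E, K/E <= 7/16 -> puncturing
print(nr_polar_code_length(k=30, e=140, n_max=9))  # correction keeps N = 128 (slight repetition)
```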
The K-bit output of the CRC encoder is interleaved before being fed to the polar encoder. The interleaver is activated through the IIL flag; in particular, the input bit interleaver is activated for the PBCH and PDCCH payloads, while it is deactivated (set to zero) for the PUCCH and PUSCH control information. The input bit interleaver interleaves up to Kmax = 164 input bits, where the interleaving pattern is calculated based on the sequence Πmax(m) given in [7]. The maximum number of input bits Kmax is set to 164, implying that the maximum number of control information bits without CRC is limited to 140. In more detail, (Kmax − K) is subtracted from all the entries of Π(k), such that Π(k) contains the integers smaller than K in permuted order. This scrambling sequence has been proposed to facilitate early termination, both during normal decoding and during DCI blind detection. This is made possible by the fact that, after interleaving, every CRC remainder bit is placed after its relevant information bits. The interleaving function is applied to the vector c to obtain the K-bit vector c' = (cΠ(0), cΠ(1), ..., cΠ(K−1)) [7]. In the subchannel allocation process prior to polar encoding, the vector c' is expanded into the N-bit vector u with the addition of the assistant bits and the frozen bits.
To begin with, the frozen set QF^N and the complementary information set QI^N are computed based on the polar reliability sequence and the rate-matching scheme. The information bits are subsequently assigned to vector u according to the information set. The assistant PC bits are calculated and stored in u, if necessary [7,36]. The first bits identified in the frozen set correspond to the indices of the N − E bits that are not transmitted, that is, the bits that are removed from the codeword by the rate matching. These indices correspond to the first N − E or the last N − E codeword bits in the case of puncturing and shortening, respectively. Due to the presence of an interleaver between the encoding and the rate matching, the actual indices to be added to the frozen set correspond to the first or the last bits after the interleaving process. If K/E ≤ 7/16, and hence the polar code must be punctured, additional indices are included in the frozen set to prevent bits in the information set from becoming ineffective due to puncturing. Furthermore, new indices are added to the frozen set from the reliability sequence, starting from the least reliable bits. The polar reliability sequence is a list of integers smaller than 1024, sorted in reliability order from the least reliable to the most reliable; indices not smaller than N are skipped during the creation of QI^N. Unlike the conventional polar encoding process, where the Bhattacharyya parameters or, in general, the reliability factors are calculated prior to the encoding process, in 3GPP NR those reliability factors are tabulated in the standard specification. Prior to encoding, the polar sequence Q^(Nmax) = {Q0^(Nmax), Q1^(Nmax), ..., Q(Nmax−1)^(Nmax)}, in which Qi^(Nmax) ∈ {0, 1, ..., Nmax − 1} for i = 0, 1, ..., Nmax − 1 denotes a bit index, is sorted in ascending order of reliability factors W(Q0^(Nmax)) < W(Q1^(Nmax)) < ... < W(Q(Nmax−1)^(Nmax)), where W(Qi^(Nmax)) denotes the reliability of bit index Qi^(Nmax). For any code block of length N bits, the same polar sequence Q^(N−1) is utilized. The polar sequence Q^(N−1) is the subset of the polar sequence Q^(Nmax) containing the elements with values less than N, ordered in ascending order of reliability factors W(Q0^(N)) < W(Q1^(N)) < ... < W(Q(N−1)^(N)).
In the preceding expressions, QI^N denotes the set of information bit indices in the polar sequence Q^(N−1), and QF^N is the set of the remaining bit indices in the polar sequence Q^(N−1), where QI^N and QF^N are derived taking into account the subblock interleaving, |QI^N| = K + nPC (with |·| denoting the cardinality of the set, that is, the length of the sequence), and nPC is the number of PC bits (see Table 4.15).

Table 4.15: Polar sequence Q^(Nmax) and the associated reliability factors W(Q^(Nmax)) (the full 1024-entry sequence is tabulated in [7]).

Figure 4.48 Downlink (CRC-24, list size 8) and uplink (CRC-11, list size 8) polar coding performance versus SNR for various K/E ratios [67].

Once the input bits are reordered and moved to the reliable positions according to the above reliable bit position determination procedure, the reordered bits are passed through the polar encoder [7]. As we defined earlier, the polar code generator matrix GN = (G2)^⊗n is constructed as the nth Kronecker power of matrix G2 = [1 0; 1 1]. For a bit index j = 0, 1, ..., N − 1, let gj denote the jth row of GN and w(gj) the row weight of gj, where w(gj) is the number of ones in gj. Let us further assume that QPC^N is the set of bit indices for the PC bits, where the cardinality of the set is |QPC^N| = nPC. A number (nPC − nPC^wm) of PC bits are placed in the least reliable bit indices in QI^N. The other nPC^wm PC bits are placed in the bit indices of minimum row weight in Q̃I^N, where Q̃I^N denotes the (|QI^N| − nPC) most reliable bit indices in QI^N. If there are more than nPC^wm bit indices of the same minimum row weight in Q̃I^N, the nPC^wm PC bits are placed in the bit indices of the highest reliability and the minimum row weight in Q̃I^N. The output bit sequence following polar encoding d = [d0, d1, ..., dN−1] is obtained as d = uGN, where vector u = [u0, u1, ..., uN−1] is derived from the interleaved sequence following the above bit reordering process. The encoding is performed in GF(2).
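To make the encoding step concrete, the short Python sketch below builds GN as the nth Kronecker power of G2, places the information bits on the most reliable subchannels, freezes the rest, and computes d = uGN in GF(2). This is an illustration, not the full NR construction: the reliability sequence passed in is a stand-in for the tabulated sequence in [7], and PC bits, CRC, and sub-block interleaving are omitted.

```python
import numpy as np

def polar_encode(info_bits, N, reliability_seq):
    """Toy polar encoder: d = u * G_N over GF(2).

    reliability_seq lists bit indices from least to most reliable;
    the K most reliable positions carry information bits, the rest are frozen to 0.
    """
    n = int(np.log2(N))
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):                      # G_N = G2 kron ... kron G2 (n times)
        G = np.kron(G, G2)
    K = len(info_bits)
    info_set = sorted(reliability_seq[-K:]) # K most reliable subchannels
    u = np.zeros(N, dtype=np.uint8)
    u[info_set] = info_bits                 # frozen positions remain 0
    return (u @ G) % 2                      # encoding in GF(2)

# Example with N = 8 and an assumed reliability ordering (least to most reliable)
rel_seq = [0, 1, 2, 4, 3, 5, 6, 7]
print(polar_encode(np.array([1, 0, 1, 1], dtype=np.uint8), 8, rel_seq))
```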
The performance of the polar codes with different K/E ratios is shown in Fig. 4.48. GF(2) is the Galois field comprising two elements and is the smallest finite field. One may also define GF(2) as the quotient ring of the ring of integers Z by the ideal 2Z of all even numbers, GF(2) = Z/2Z.

4.1.7.3 Principles of Low Density Parity Check Coding

LDPC codes belong to the class of forward error correction codes which are used for sending a message over a noisy transmission channel. These codes can be described by a parity-check matrix which contains mostly zeros and a relatively small number of ones; thus the decoding complexity is small when compared to other code constructions. A very efficient iterative decoding algorithm known as belief propagation (BP) is used in the decoder. The LDPC codes can be divided into two groups: regular LDPC codes, when the column weight and the row weight of the PC matrix are constant, and irregular LDPC codes, when the column weight and the row weight are not constant, meaning that the number of ones per row and per column varies. The LDPC codes can be represented in different ways. Similar to all linear block codes, a matrix representation by the corresponding generator matrix G or the PC matrix H is possible. Thus, if the number of input information bits is K and the number of output bits is N, the PC matrix H is expressed as an M × N matrix, where M = N − K, and the resultant code rate is K/N. The LDPC codes can be further graphically represented with a Tanner graph, which is one of the most common graphical representations for LDPC codes. It provides a complete representation of the code and helps to describe the decoding algorithm. Tanner graphs are bipartite graphs, that is, there are two disjoint sets of nodes. The two types of nodes are variable nodes (VNDs) and check nodes (CNDs). The VNDs represent the code bits; thus, each of the N columns of matrix H is represented by one VND.
The CNDs represent the code constraints; thus, each of the M rows of matrix H is represented by one CND. Each VND vi is connected to a CND cj if hij = 1. For the following example PC matrix, the Tanner graph is as shown in Fig. 4.49:

H = [0 1 0 1 1 0 0
     1 1 1 0 0 1 0
     0 0 0 0 0 0 1
     1 0 1 1 0 1 0]

Figure 4.49 Example Tanner graph (check nodes connected to variable nodes) [38].

Quasi-cyclic (QC) LDPC codes belong to the class of structured codes that are relatively easier to implement without significantly compromising the performance of the code. The QC-LDPC codes can be implemented using simple shift registers with linear complexity based on their generator matrices. Well-designed QC-LDPC codes have been shown to outperform computer-generated random LDPC codes in terms of bit-error rate and block-error rate performance and the error floor. These codes also have advantages in decoder hardware implementation due to their cyclic symmetry, which results in simple, regular interconnection and a modular structure. In most wireless communication standards, including 3GPP NR, a base graph u is used to define the LDPC code. However, u needs to be transformed into a PC matrix H using a lifting factor Z. Lifting means that each (integer) entry of the base graph u is replaced by a permuted Z × Z identity matrix. We start with an identity matrix I and circularly shift the entries of this matrix according to the base graph entry uij (modulo Z) to obtain the desired matrix H. As an example, suppose the 2 × 2 base graph matrix u = [2 3; 3 4] and the lifting factor Z = 3 are given. The transformation from u to H can be performed by replacing each entry uij with the 3 × 3 identity matrix circularly shifted to the right by uij mod Z:

H = [0 0 1 1 0 0
     1 0 0 0 1 0
     0 1 0 0 0 1
     1 0 0 0 1 0
     0 1 0 0 0 1
     0 0 1 1 0 0]

The LDPC codes are universally specified by their PC matrices. The PC matrix of a QC-LDPC code is given as an array of sparse circulant matrices of the same size. A circulant matrix is a square matrix in which each row is the cyclic shift of the row above it, and the first row is the cyclic shift of the last row. For a circulant matrix, each column is the downward cyclic shift of the column on its left, and the first column is the downward cyclic shift of the last column.
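The lifting operation in the example above is mechanical enough to express in a few lines. The following Python sketch is illustrative; the base graph, shift values, and lifting size are the toy values from the example, not an NR base graph.

```python
import numpy as np

def lift_base_graph(shifts, Z):
    """Expand a base graph of shift coefficients into a QC-LDPC parity-check matrix.

    Each entry s >= 0 is replaced by the Z x Z identity matrix circularly
    shifted to the right by (s mod Z); an entry of -1 denotes an all-zero block.
    """
    rows, cols = shifts.shape
    H = np.zeros((rows * Z, cols * Z), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            s = shifts[i, j]
            if s < 0:
                continue                         # all-zero Z x Z block
            block = np.roll(np.eye(Z, dtype=np.uint8), s % Z, axis=1)
            H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = block
    return H

# The 2 x 2 example from the text with lifting factor Z = 3
u = np.array([[2, 3], [3, 4]])
print(lift_base_graph(u, 3))
```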
The degree of a node is the number of edges (lines) connected to it in a Tanner graph. The decoding algorithms for LDPC codes were discovered independently, and they come under different names. The most common ones are the BP algorithm, the message passing algorithm (MPA), and the sum-product algorithm. In a BP algorithm, the probabilistic messages are iteratively exchanged between variable and check nodes until either a valid codeword is found or the maximum number of iterations is exceeded. The LDPC codes can be decoded using message passing or BP on the bipartite Tanner graph, where the CNDs and VNDs communicate with each other, successively passing revised estimates of the associated log-likelihood ratio (LLR) in each decoding iteration. The bit reliability metric is defined as LLR(bi) = log p(bi = 0) − log p(bi = 1), where bi denotes the ith bit in the received codeword. If LLR(bi) > 0, it implies that bi = 0 is more likely, while LLR(bi) < 0 implies that bi = 1 is more probable. As an example, assume that the all-zero codeword (0, 0, 0, 0, 0, 0, 0, 0, 0, 0) is transmitted and (0, 0, 0, 1, 0, 0, 0, 0, 0, 0) is received by the decoder. Each valid 10-bit codeword (c1, c2, ..., c10) has the sum (modulo 2) of all bits equal to zero. The received vector does not satisfy this code constraint, indicating that there are errors present in the received codeword. Furthermore, assume that the decoder is provided with a bit-level reliability metric in the form of the probability (confidence in the received values) of each bit being correct, (0.8, 0.86, 0.7, 0.55, 1, 1, 0.8, 0.98, 0.68, 0.99). From the soft information, it follows that bit c4 is the least reliable and should be flipped to bring the received codeword into compliance with the code constraint. Using LLRs as messages, the hardware implementation becomes much easier when compared to the probability-domain message passing algorithm. The implementation complexity is further reduced by simplifying the process for updating the CNDs, which is the most complex part of the message passing algorithm. This simplified algorithm is known as the min-sum algorithm.
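A minimal numerical illustration of the soft-decision reasoning above is given below in Python. The probabilities are the example values from the text, and the whole word is treated as a single parity check, which is a simplification of a real LDPC decoder.

```python
import numpy as np

# Probability that each received bit value is correct (example from the text)
received = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
p_correct = np.array([0.8, 0.86, 0.7, 0.55, 1.0, 1.0, 0.8, 0.98, 0.68, 0.99])

# LLR(b) = log p(b = 0) - log p(b = 1); positive LLR favors bit value 0
p_zero = np.where(received == 0, p_correct, 1.0 - p_correct)
p_zero = np.clip(p_zero, 1e-9, 1 - 1e-9)     # avoid log(0) for fully reliable bits
llr = np.log(p_zero) - np.log(1.0 - p_zero)

if received.sum() % 2 != 0:                  # single parity-check constraint violated
    weakest = np.argmin(np.abs(llr))         # least reliable bit (smallest |LLR|)
    received[weakest] ^= 1                   # flip it to satisfy the constraint
    print(f"flipped bit c{weakest + 1}:", received)
```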
An LDPC decoder can be implemented using serial, parallel, or par- tially parallel architectures. The performance of the LDPC decoder depends on various fac- tors such as decoder algorithm and architecture, quantization of LLRs and the maximum number of decoding iterations. The maximum number of decoding iterations used for the decoding process determines the data rate and latency of the LDPC decoder. After perform- ing maximum number of decoding iterations, the codeword is then estimated. In order to save decoder power consumption and to decrease the latency, a decoder design that verifies the codeword after each iteration and stops the decoding process when the estimated code- word is correct, is needed. If parity check is satisfied then the codeword is estimated at the beginning of the next iteration and the decoding process is stopped. 532 Chapter 4 4.1.7.4 NR Low Density Parity Check Coding 3GPP NR has taken a different approach to LDPC coding for the downlink and uplink traf- fic channels. In order to ensure support of a wide range of code rates with sufficient granu- larity and HARQ-IR, two base graphs with the structures that are explained later in this section have been adopted. Code extension of a PC matrix (lower triangular extension, which includes diagonal-extension as a special case) is used to support HARQ-IR and rate- matching. The 3GPP NR LDPC base graphs consist of five submatrices A,B,C,D,E as shown in Fig. 4.50. As depicted in the figure, matrix A corresponds to the systematic bits; B is a square matrix and corresponds to the parity bits. The first or last column of matrix B has a weight equal to one. The last row of B has a non-zero value and a weight equal to one. If there is a column with weight of one then the remaining columns contain a square matrix such that the first column has weight three. The columns after the weight three col- umn have a dual diagonal structure (i.e., main diagonal and off diagonal elements). If there is no column with weight one, B consists of only a square matrix
such that the first column has weight three; C is a zero matrix, D corresponds to single parity-check rows, and E is an identity matrix for the base graph [55]. The rate matching for the LDPC code uses a circular buffer similar to LTE. The circular buffer is filled with an ordered sequence of systematic bits and parity bits. For HARQ-IR, each redundancy version RVi is assigned a starting bit location Si in the circular buffer. For HARQ-IR retransmission of RVi, the coded bits are read out sequentially from the circular buffer, starting with the bit location Si. Limited buffer rate matching is further supported. Before code block segmentation, LCRC = 24 TB-level CRC bits are attached to the end of the transport block. The value of LCRC was determined so that, together with the inherent error detection of the LDPC codes, the probability of misdetection of the TB is kept below 10^-6.

Figure 4.50 Structure of the 3GPP NR base graphs with the dual-diagonal property (systematic and parity bits of the LDPC codeword) [55].

The NR LDPC coding chain includes code block segmentation, CRC attachment, LDPC encoding, rate matching, and systematic-bit-priority interleaving. More specifically, code block segmentation allows very large transport blocks (MAC PDUs) to be divided into multiple smaller code blocks that can be efficiently processed by the LDPC encoder/decoder. The CRC bits are then attached for error detection purposes. When combined with the inherent error detection of the LDPC codes through the parity-check equations, a very low probability of undetected errors can be achieved. The rectangular interleaver with the number of rows equal to the QAM order improves the performance by making systematic bits more reliable than parity bits for the initial transmission of the code blocks. The NR LDPC codes use a QC structure, where the PC matrix is defined by a smaller base graph. Each entry of the base graph represents either a zero matrix or a shifted Z × Z identity matrix, where a cyclic shift to the right of each row is applied.
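The circular-buffer operation for HARQ-IR described above can be sketched compactly. In the Python fragment below, the redundancy-version starting positions are passed in as a parameter rather than taken from the specification, so the values used in the example are illustrative only.

```python
def rate_match(coded_bits, E, rv, start_positions):
    """Read E bits from a circular buffer starting at the RV-specific position.

    coded_bits     : systematic + parity bits written into the circular buffer
    E              : number of bits to transmit in this (re)transmission
    rv             : redundancy version index (0..3)
    start_positions: mapping from RV index to starting bit location S_i
    """
    Ncb = len(coded_bits)
    k0 = start_positions[rv]
    return [coded_bits[(k0 + k) % Ncb] for k in range(E)]

# Illustrative example: a 40-bit circular buffer, 24 bits sent per transmission,
# with assumed (not specification-defined) starting positions for RV0..RV3
buffer_bits = list(range(40))            # placeholder content to show the read-out order
starts = {0: 0, 1: 10, 2: 20, 3: 30}
print(rate_match(buffer_bits, 24, 2, starts))   # wraps around the buffer end
```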
Unlike the LDPC codes specified in other wireless technologies, the NR LDPC codes have a rate-compatible structure, which means codewords with different rates can be generated by including a different number of parity bits, or equivalently by using a smaller subset of the full PC matrix. This is especially useful for communication systems employing HARQ-IR for retransmissions. Another advantage of this structure is that, for higher rates, the PC matrix and the decoding complexity and latency are smaller. This is in contrast with the LTE turbo codes, which have constant decoding complexity and latency irrespective of the code rate [55]. The NR data channel supports two base graphs to ensure that good performance and acceptable decoding latency can be achieved for the full range of code rates and information block sizes. Base graph 1 is optimized for large information block sizes and high code rates. It is designed for a maximum code rate of 8/9 and may be used for code rates up to 0.95. Base graph 2 is optimized for small information block sizes and lower code rates. The lowest code rate for base graph 2 without using repetition is 1/5. This is significantly lower than that of the LTE turbo codes, which rely on repetition for code rates below 1/3. The NR LDPC codes can also achieve an additional coding gain at low code rates, which makes them suitable for high-reliability scenarios. From the decoding complexity perspective, for a given number of input bits, it is beneficial to use base graph 2, since it is more compact and utilizes a larger lifting factor, that is, more parallelism, relative to base graph 1. The decoding latency is typically proportional to the number of non-zero elements in the base graph. Since base graph 2 has much fewer non-zero elements compared to base graph 1 for a given code rate, its decoding latency is significantly lower. In 3GPP NR, the input bit sequence is represented as c = [c0, c1, ..., cK−1], where K is the number of information bits to encode. The output LDPC-coded bits are denoted by d0, d1, ..., dN−1, where N = 66Zc for base graph 1 and N = 50Zc for base graph 2, and where the value of the lifting factor Zc is given in Table 4.7. A code block is encoded by the LDPC encoder based on the following procedure [7]:
1. Find the set with index iLS in Table 4.7 which contains Zc.
2. Set dk−2Zc = ck for k = 2Zc, ..., K − 1.
3. Generate N + 2Zc − K parity bits w = [w0, w1, ..., wN+2Zc−K−1] such that H[c w]^T = 0.
4. The encoding is performed in GF(2). For base graph 1, matrix HBG (representing the base graph) has 46 rows and 68 columns. For base graph 2, matrix HBG has 42 rows and 52 columns. The elements of the HBG matrices are given in [7]. The PC matrix H is obtained by replacing each element of HBG with a Zc × Zc matrix such that each element of value 0 in HBG is replaced by an all-zero matrix of size Zc × Zc, and each element of value 1 in HBG is replaced by a circular permutation matrix I(Pi,j) of size Zc × Zc, where i and j are the row and column indices of the element, and I(Pi,j) is obtained by circularly shifting the identity matrix I of size Zc × Zc to the right Pi,j times. The value of Pi,j is given by Pi,j = Vi,j mod Zc, and the value of Vi,j is given in [7].
5. Set dk−2Zc = wk−K for k = K, ..., N + 2Zc − 1.

The performance of the NR LDPC codes over an AWGN channel was evaluated using a normalized min-sum decoder, layered scheduling, and a maximum of 20 decoder iterations. Fig. 4.51 shows the required SNR to achieve certain BLER targets as a function of the information block size K for code rate 1/2 and QPSK modulation. The results show that the NR LDPC codes provide consistently good performance over a large range of block sizes. For this code rate, base graph 2 is used for all K for which it is defined, that is, for K < 3840, while base graph 1 is used for larger values of K (note the discontinuity point in the curves at K = 3840). As shown in the figure, there is a small gap in performance at the block size where the transition occurs between base graph 1 and base graph 2 [55].

Figure 4.51 Required SNR versus information block size for the NR LDPC codes at code rate 1/2, QPSK modulation, and a maximum of 20 decoding iterations, for target BLER of 10^-1, 10^-2, and 10^-3 [55].
4.1.7.5 Modulation Schemes and MCS Determination

3GPP NR supports various modulation schemes (QPSK, 16QAM, 64QAM, and 256QAM) for CP-OFDM in both downlink and uplink. For DFT-s-OFDM, however, NR uses an additional modulation, π/2-BPSK, in the uplink to achieve better efficiency for power amplifiers and lower PAPR in very low data rate cases. While 1024QAM can theoretically provide 25% throughput gain relative to 256QAM, due to real-world implementation complexity and the need for very high SINR levels to achieve acceptable BLER targets, it was not included in the NR modulation schemes. A constellation diagram is a graphical representation of the complex envelope of each possible symbol state. The power efficiency is related to the minimum distance between the points in the constellation. The bandwidth efficiency is related to the number of points in the constellation. Gray coding is used to assign groups of bits to each constellation point; in Gray coding, adjacent constellation points differ by a single bit. The modulation mapping function takes the input bit sequence b(i)b(i+1)...b(i+Qm − 1) and generates the corresponding complex-valued modulation symbol I + jQ at the output, where the normalization factor is chosen to achieve equal average symbol power across modulation schemes. More specifically, NR supports the following modulation schemes depending on the channel conditions experienced by the users [6] (a worked mapping example follows the list):

π/2-BPSK: In this case, bit b(i) is mapped to the complex-valued modulation symbol d(i) according to d(i) = (1/√2)[(1 − 2b(i)) + j(1 − 2b(i))] exp(j(π/2)(i mod 2)).

BPSK: In this case, bit b(i) is mapped to the complex-valued modulation symbol d(i) according to d(i) = (1/√2)[(1 − 2b(i)) + j(1 − 2b(i))].

QPSK: In this case, a pair of bits b(2i), b(2i + 1) is mapped to the complex-valued modulation symbol d(i) according to d(i) = (1/√2)[(1 − 2b(2i)) + j(1 − 2b(2i + 1))].

16QAM: In this case, the bit quadruplet b(4i), b(4i + 1), b(4i + 2), b(4i + 3) is mapped to the complex-valued modulation symbol d(i) according to d(i) = (1/√10){(1 − 2b(4i))[2 − (1 − 2b(4i + 2))] + j(1 − 2b(4i + 1))[2 − (1 − 2b(4i + 3))]}.

64QAM: In this case, the bit sextuplet b(6i), b(6i + 1), b(6i + 2), b(6i + 3), b(6i + 4), b(6i + 5) is mapped to the complex-valued modulation symbol d(i) according to d(i) = (1/√42){(1 − 2b(6i))[4 − (1 − 2b(6i + 2))[2 − (1 − 2b(6i + 4))]] + j(1 − 2b(6i + 1))[4 − (1 − 2b(6i + 3))[2 − (1 − 2b(6i + 5))]]}.

256QAM: In this case, the bit octuplet b(8i), b(8i + 1), b(8i + 2), b(8i + 3), b(8i + 4), b(8i + 5), b(8i + 6), b(8i + 7) is mapped to the complex-valued modulation symbol d(i) according to d(i) = (1/√170){(1 − 2b(8i))[8 − (1 − 2b(8i + 2))[4 − (1 − 2b(8i + 4))[2 − (1 − 2b(8i + 6))]]] + j(1 − 2b(8i + 1))[8 − (1 − 2b(8i + 3))[4 − (1 − 2b(8i + 5))[2 − (1 − 2b(8i + 7))]]]}.
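As an illustration of the mapping equations above, the following Python function implements the QPSK and 16QAM rules (the higher orders follow the same nested pattern); it is a didactic sketch rather than an optimized mapper.

```python
import numpy as np

def modulate(bits, scheme="qpsk"):
    """Map a bit array to complex symbols per the NR constellation equations."""
    b = np.asarray(bits, dtype=float)
    if scheme == "qpsk":
        b = b.reshape(-1, 2)
        return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
    if scheme == "16qam":
        b = b.reshape(-1, 4)
        i_part = (1 - 2 * b[:, 0]) * (2 - (1 - 2 * b[:, 2]))
        q_part = (1 - 2 * b[:, 1]) * (2 - (1 - 2 * b[:, 3]))
        return (i_part + 1j * q_part) / np.sqrt(10)
    raise ValueError("unsupported scheme in this sketch")

# Example: the 16QAM symbol for bit quadruplet (0, 1, 1, 0)
print(modulate([0, 1, 1, 0], "16qam"))   # -> (3 - 1j) / sqrt(10)
```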
To determine the modulation order, target code rate, and TBS in the PDSCH, the UE needs to read the 5-bit modulation and coding scheme field IMCS in the DCI to determine the modulation order Qm and target code rate R based on the procedure that we defined in the previous section. The UE then uses the number of layers ν and the total number of allocated PRBs before rate matching nPRB to determine the TBS. The UE may skip decoding of a TB in an initial transmission if the effective channel code rate is higher than 0.95. The effective channel code rate is defined as the number of downlink information bits (including CRC bits) divided by the number of physical channel bits transmitted on PDSCH [9]. Fig. 4.52 shows the constellations of the NR modulation schemes under noise-free and SNR = 10 dB conditions, which demonstrate the effect of the achievable SNR at the receiver detector on the choice of modulation order for transmission.

Figure 4.52 Modulation constellations under noise-free and SNR = 10 dB conditions.

The concepts of MCS, code rate, TB, and TBS in 3GPP NR are similar to those of 3GPP LTE. In NR, the DL-SCH and UL-SCH MCS and code rate for transmission are determined by predefined tables given in [9]. However, TBS determination in NR is more complicated than that of LTE.
Unlike LTE, where all possible TBSs were precalculated and listed in the MCS table, in NR the TBS determination process is described as a procedure that is illustrated in Fig. 4.53. As shown in the flow chart, the initial input to this algorithm is Ninfo; however, to determine this value, the following calculations are necessary [9]. The number of REs allocated for PDSCH within a PRB is N'RE = 12 Nsymb^sh − NDMRS^PRB − Noh^PRB, where Nsymb^sh is the number of symbols of the PDSCH allocation within the slot, NDMRS^PRB is the number of REs for DM-RS per PRB in the scheduled duration, including the overhead of the DM-RS CDM groups without data, and Noh^PRB is the overhead configured by the higher-layer parameter (Overhead) in PDSCH-ServingCellConfig. The total number of REs allocated for the PDSCH is then NRE = min(156, N'RE) × nPRB, where nPRB is the total number of allocated PRBs for the UE, and the intermediate number of information bits is Ninfo = NRE × R × Qm × ν, where R is the target code rate, Qm is the modulation order, and ν is the number of layers.

Figure 4.53 TBS determination procedure in NR [30] (the flow chart quantizes Ninfo and, depending on whether Ninfo exceeds 3824, either looks up the TBS in Table 4.17 or applies the rounding and code-block alignment steps to obtain the final TBS).

For the downlink shared channel, the supported modulation schemes include QPSK, 16QAM, 64QAM, and 256QAM. After detecting the CSI-RS and estimating the channel quality, the UE reports the CQI to the gNB, which includes information such as the modulation scheme and coding rate. To balance the overhead and the granularity of CQI indication, two CQI/MCS tables are defined for eMBB, where the maximum order of modulation in one CQI/MCS table is 64QAM and in the other table is 256QAM (see MCS Tables I and II in Table 4.16). The network will instruct the UE to select the CQI/MCS table through RRC signaling. The third MCS table is meant for URLLC use cases where the target BLER is 10^-5, which is signaled to the UE when the CRC of the PDCCH is scrambled with MCS-C-RNTI. This MCS table was designed to allow single transmissions to UEs with delay-sensitive applications to ensure maximum likelihood of correct reception.
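The Ninfo calculation introduced above, and the quantization applied to it (described further below and in Fig. 4.53), can be sketched as follows in Python. The quantization branch shown here follows the commonly cited procedure for Ninfo greater than 3824; treat the thresholds and rounding as indicative, since the normative values are those in [9], and the small-TBS branch requires the lookup table (Table 4.17) that is not reproduced here.

```python
import math

def n_info(n_prb, n_symb, n_dmrs_prb, n_oh_prb, R, Qm, nu):
    """Intermediate number of information bits for a PDSCH allocation."""
    n_re_prime = 12 * n_symb - n_dmrs_prb - n_oh_prb   # REs per PRB after overhead
    n_re = min(156, n_re_prime) * n_prb                 # capped per PRB, then scaled
    return n_re * R * Qm * nu

def tbs_large(Ninfo, R):
    """Quantized TBS for Ninfo > 3824 (byte-aligned, code-block aligned)."""
    n = math.floor(math.log2(Ninfo - 24)) - 5
    Ninfo_q = max(3840, 2 ** n * round((Ninfo - 24) / 2 ** n))
    if R <= 0.25:
        C = math.ceil((Ninfo_q + 24) / 3816)
    elif Ninfo_q > 8424:
        C = math.ceil((Ninfo_q + 24) / 8424)
    else:
        C = 1
    return 8 * C * math.ceil((Ninfo_q + 24) / (8 * C)) - 24

# Example: 50 PRBs, 12 PDSCH symbols, 12 DM-RS REs/PRB, no extra overhead,
# code rate 658/1024, 64QAM (Qm = 6), one layer
Ni = n_info(50, 12, 12, 0, 658 / 1024, 6, 1)
print(int(Ni), tbs_large(Ni, 658 / 1024))
```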
Table 4.16: MCS index tables 1/2/3 for the physical downlink shared channel [9] (for each 5-bit MCS index, the tables list the modulation order Qm, the target code rate R × 1024, and the spectral efficiency; MCS Table I supports up to 64QAM, MCS Table II up to 256QAM, and MCS Table III contains the low-spectral-efficiency entries intended for URLLC, with the highest indices reserved for retransmissions).

Table 4.17: NR transport block size for Ninfo ≤ 3824 [9] (lookup table of the valid TBS values in bits).

Given the modulation order, the number of resource blocks scheduled, and the scheduled transmission duration, the number of available resource elements can be computed. From this number, the resource elements used for DM-RS are subtracted. A constant, configured by higher layers, modeling the overhead of other signals such as CSI-RS or SRS, is also subtracted. The resulting estimate of the resource elements available for data is then used, together with the number of transmission layers, the modulation order, and the code rate obtained from the MCS table, to calculate an intermediate number of information bits. This intermediate number is then quantized to obtain the final transport block size,
ks, and that no padding bits are needed in the LDPC coding. The quantization also results in the same transport block size being obtained, even if there are small variations in the amount of resources allocated, a property that is useful when scheduling retransmissions on a different set of resources than the initial transmission (see Table 4.17). In the case of a retransmission, the transport block size by definition, is unchanged and there is no need to signal this information. Instead, the reserved entries represent the modulation scheme (QPSK, 16QAM, 64QAM, or if config- ured 256QAM), which allows the scheduler to use an (almost) arbitrary combination of resource blocks for the retransmission. The use of the reserved entries assumes that the UE properly received the control signaling for the initial transmission, if this is not the case, the retransmission should explicitly indicate the transport block size [14]. 4.1.8 HARQ Operation and Protocols 4.1.8.1 HARQ Principles While ARQ error control mechanism is simple and provides high transmission reliability, the throughput of ARQ schemes drop rapidly with increasing channel error rates, and the latency, due to retransmissions, could be excessively high and intolerable for some delay-- sensitive applications. Systems using forward error correction (FEC), on the other hand, can maintain constant throughput regardless of channel error rate. However, FEC schemes have some drawbacks. High reliability is hard to achieve with FEC and requires the use of long and powerful error correction codes that increase the complexity of implementation. 540 Chapter 4 The drawbacks of ARQ and FEC can be overcome, if the two error control schemes are properly combined. In order to achieve increased throughput and lower latency in packet transmission, hybrid ARQ (HARQ) scheme was designed to combine ARQ error-control mechanism and FEC coding. A HARQ system consists of a FEC subsystem contained in an ARQ system. In this approach, the average number of retransmissions is reduced by us
ing FEC through correction of the error patterns that occur more frequently; however, when the less frequent error patterns are detected, the receiver requests a retransmission where each retransmission carries the same or some redundant information to help the packet detection. The HARQ uses FEC to correct a subset of errors at the receiver and rely on error detection to detect the remaining errors. Most practical HARQ schemes utilize CRC codes for error detection and some form of FEC for correcting the transmission errors. The HARQ schemes are typically classified into two groups depending on the content of subsequent retransmis- sions, as follows [15]: HARQ with chase combining: In this HARQ scheme, the same data packet is transmitted in all retransmissions. Soft combining may be used to improve the reliability. The blocks of data along with the CRC code are encoded using FEC encoder before transmission. If the receiver is unable to correctly decode the data block, a retransmission is requested. When a retransmitted coded block is received, it is combined with the previously received block corresponding to the same information bits (using for example maximum ratio com- bining method) and fed to the decoder. Since each retransmission is an identical replica of the original transmission, the received Eb/No, that is, the energy per information bit divided by the noise spectral power density, increases per each retransmission, improving the likelihood of correct decoding. In chase combining HARQ, the redundancy version of the encoded bits is not changed from one transmission to the next; therefore, the punctur- ing pattern remains the same. The receiver uses the current and all previous HARQ trans- missions of the code block in order to decode the information bits. The process continues until either the information bits are correctly decoded and pass the CRC test or the maxi- mum number of HARQ retransmissions is reached. When the maximum number of retransmissions is reached, the MAC sublayer resets the process an
d continues with fresh transmission of the same code block. A number of parallel channels for HARQ can help improve the throughput: while one process is awaiting an acknowledgment, another process can utilize the channel and transmit subpackets. Fig. 4.54 illustrates the operation of the chase combining HARQ (HARQ-CC) scheme and how the retransmission of the same coded bits increases the combined energy per bit Eb while keeping the effective code rate intact.
2. HARQ with incremental redundancy: In this HARQ scheme, additional parity bits are sent in subsequent retransmissions. Therefore, after each retransmission, a richer set of parity bits is available at the receiver, improving the probability of reliable decoding. In incremental redundancy schemes, however, information cannot be recovered from the parity bits alone.

Figure 4.54 Illustration of the chase combining and incremental redundancy HARQ schemes [15].

In the incremental redundancy HARQ (HARQ-IR) scheme, a number of coded bits with increasing redundancy are generated and transmitted to the receiver when a retransmission is requested, to assist the receiver with the decoding of the information bits. The receiver combines each retransmission with the previously received bits belonging to the same packet. Since each retransmission carries additional parity bits, the effective code rate is lowered by each retransmission, as shown in Fig. 4.54. The IR is based on a low-rate code, and the different redundancy versions are generated by puncturing the channel coder output. In the example shown in Fig. 4.54, the basic code rate is R and one-third of the coded bits are transmitted in each retransmission. Aside from increasing the received signal-to-noise ratio Eb/N0 with each retransmission due to combining, there is a coding gain attained as a result of each retransmission. It must be noted that chase combining is a special case of HARQ-IR where the retransmissions are identical copies of the original coded bits.

4.1.8.2 UE Processing Times, HARQ Protocol and Timing

3GPP NR uses an asynchronous HARQ-IR scheme in the downlink and uplink. The gNB provides the UE with the HARQ-ACK feedback timing either dynamically in the DCI or semi-statically through RRC configuration messages. The gNB schedules each uplink transmission and retransmission using the uplink grant on DCI. In LTE, the basic mode of operation for uplink HARQ is synchronous retransmission, which can be used to reduce the scheduling overhead for retransmissions. In this case, HARQ ACK/NACK is carried on PHICH as a short and efficient message. In NR, asynchronous HARQ is supported. In order to support asynchronous HARQ, a straightforward solution for the gNB is to send an explicit uplink grant through PDCCH for the retransmission in the same way that is done for transmissions in LTE. In some sense, the explicit grant can imply an implicit ACK/NACK. For example, an explicit scheduling grant of a retransmission may imply a NACK for the initial transmission. The maximum number of HARQ processes in the downlink and uplink per cell is 16. The number of HARQ processes is separately configured for the UE for each cell by the RRC parameter nrofHARQ-ProcessesForPDSCH. In the absence of any configuration, the UE may assume a default number of 8 HARQ processes.
The UE must provide a valid HARQ-ACK message if the first uplink symbol of the PUCCH conveying the HARQ-ACK information, as identified by the HARQ-ACK timing parameter K1 and the assigned PUCCH resource, including the effect of the timing advance, starts on or after symbol L1, that is, the next uplink symbol with its cyclic prefix starting after Tproc,1 = (N1 + d1,1) × 2192 × κ × 2^(−μ) × TC (the PDSCH processing time) following the end of the last symbol of the PDSCH carrying the transport block being acknowledged. As shown in Fig. 4.55, parameter N1 is based on the numerology μ, which corresponds to the one of (μPDCCH, μPDSCH, μUL) resulting in the largest Tproc,1, where μPDCCH, μPDSCH, and μUL correspond to the subcarrier spacings of the PDCCH carrying the scheduling DCI, the PDSCH transmission, and the uplink channel on which the HARQ-ACK is transmitted, respectively. As shown in Table 4.18, the value of parameter N1 further depends on the PDSCH DM-RS pattern, that is, whether an additional DM-RS is configured, as well as the UE PDSCH processing capability. The value of parameter d1,1 is dependent on the PDSCH mapping type (A or B), the UE PDSCH processing capability, and the number of PDSCH symbols [9]. The timing relationship between the HARQ-ACK and the PDSCH data transmission depends on the value of the above parameters and is depicted in Fig. 4.55. The coding gain is defined as the difference between the Eb/N0 required to achieve a given bit error rate in a coded system and the Eb/N0 required to achieve the same BER in an uncoded system.

Figure 4.55 Timing relationship between PDSCH and HARQ-ACK transmission [9].

Table 4.18: Physical downlink shared channel processing times [9] (the table lists the PDSCH decoding time N1 in OFDM symbols, per subcarrier spacing, for UE PDSCH processing capabilities 1 and 2, with and without an additional PDSCH DM-RS configured beyond the front-loaded DM-RS).
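To get a feel for the numbers, the short calculation below evaluates Tproc,1 = (N1 + d1,1) × 2192 × κ × 2^(−μ) × TC in microseconds (Python). The constants κ = 64 and TC = 1/(480000 × 4096) s follow the NR basic time unit definition; the example values of N1 and d1,1 are illustrative inputs, not a statement of what applies to a particular UE capability.

```python
KAPPA = 64
T_C = 1 / (480e3 * 4096)          # NR basic time unit in seconds

def t_proc_1_us(N1, d11, mu):
    """PDSCH processing time T_proc,1 in microseconds for numerology mu."""
    return (N1 + d11) * (2048 + 144) * KAPPA * 2 ** (-mu) * T_C * 1e6

# Example: N1 = 10 symbols, d1,1 = 0, 30 kHz SCS (mu = 1)
print(round(t_proc_1_us(10, 0, 1), 2))   # ~357 microseconds
```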
In the case of carrier aggregation, the multi-carrier nature of the physical layer is only exposed to the MAC sublayer, for which one HARQ entity is required per serving cell. In both uplink and downlink, there is one independent HARQ entity per serving cell and one TB is generated per TTI in the absence of spatial multiplexing. Each TB and its associated HARQ retransmissions are mapped to a single serving cell [11]. Fig. 4.56 depicts the HARQ-ACK timing requirements in a cross-carrier scheduling scenario when a UE receives from and transmits to two gNBs.

Figure 4.56 UE processing time considering the timing difference between different cells (cross-carrier and same-carrier scheduling with 30 kHz SCS component carriers, 500 µs slots, and a UE processing time of 10 symbols plus timing advance) [65].

In the NR downlink, retransmissions are scheduled in the same way as new data, that is, they may occur at any time and at an arbitrary frequency location within the downlink cell bandwidth. The scheduling assignment contains the necessary HARQ-related control signaling, such as the process number, new-data indicator, CBGTI, and CBGFI in the case of CBG-based retransmission, as well as information to handle the transmission of the acknowledgment in the uplink, such as timing and resource indication. Upon receiving a scheduling assignment in the DCI, the receiver attempts to decode the TB after soft combining with previous retransmissions. Since transmissions and retransmissions are scheduled using the same framework, the UE needs to know whether the transmission is a new transmission, in which case the soft buffer should be flushed, or a retransmission, in which case soft combining should be performed. Therefore, an explicit new-data indicator is included for the scheduled TB as part of the scheduling information transmitted in the downlink. The new-data indicator is toggled for a new TB.
Upon reception of a downlink scheduling assignment, the UE checks the new-data indicator to determine whether the current transmission should be soft combined with the received data currently in the soft buffer for the HARQ process, or whether the soft buffer should be cleared [9,11]. The NR uplink uses an asynchronous HARQ protocol in the same way as the downlink. The HARQ-related information, including the process number, the new-data indicator, and, if CBG-based retransmission is configured, the CBGTI, is included in the scheduling grant. The uplink CBGTI is used in the same way as in the downlink, that is, to indicate the CBGs that need to be retransmitted in the case of CBG-based retransmission. Note that no CBGFI is needed in the uplink, as the soft buffer is located in the gNB, which can decide whether to flush the buffer or not based on the scheduling decisions. The use of HARQ in NR allows reliable delivery of layer-1 packets between peer entities. Each HARQ process supports one TB when the physical layer is not configured for downlink/uplink spatial multiplexing; otherwise, each HARQ process supports one or multiple TBs. The NR HARQ operation and timing are illustrated in Fig. 4.57, where the parameters used in the figure are defined as follows [8,9]:
- K0 denotes the delay between the downlink grant and the corresponding PDSCH data reception.
- K1 is the delay between the PDSCH data reception and the corresponding ACK/NACK transmission in the uplink.
- K2 denotes the delay between the uplink grant reception in the downlink and the uplink data transmission on PUSCH.
- K3 is the delay between the ACK/NACK reception in the uplink and the corresponding retransmission of downlink data on PDSCH.

Figure 4.57 NR HARQ protocol timing (downlink assignment, downlink data, uplink grant, and uplink data) [9].

The parameters K0, K1, and K2 are signaled via DCI, and if K1 = 0, a self-contained subframe/slot is configured. The choice of the N1 value has a significant impact on the UE processing time, where N1 is defined as the number of OFDM symbols from the end of the PDSCH reception to the start of the corresponding ACK/NACK transmission from the UE (as shown in Fig. 4.58).
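As a simple illustration of these timing parameters, the fragment below computes the slot in which the HARQ-ACK for a PDSCH is expected, given the slot that carried the PDSCH and the K1 value signaled in the DCI (Python; slot-level granularity only, ignoring the symbol-level L1/Tproc,1 condition discussed above).

```python
def harq_ack_slot(pdsch_slot, k1, slots_per_frame=20):
    """Slot index (within a radio frame) carrying the HARQ-ACK for a PDSCH.

    pdsch_slot      : slot in which the PDSCH was received
    k1              : PDSCH-to-HARQ-ACK delay in slots (from the DCI)
    slots_per_frame : 20 slots per 10 ms frame for 30 kHz SCS
    """
    return (pdsch_slot + k1) % slots_per_frame

# Example: PDSCH in slot 8 with K1 = 4 -> HARQ-ACK in slot 12
print(harq_ack_slot(8, 4))
```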
Depending on the frame structure, some UE data processing can be done in parallel with the PDSCH reception in order to allow faster HARQ ACK/NACK transmission. For example, for a front-loaded DM-RS pattern and slot-based scheduling with a subcarrier spacing of 15 kHz, the PDCCH processing and the demodulation/detection of all symbols other than the last symbol of the PDSCH can be performed during the PDSCH reception, so that the remaining UE processing time is T1 = TFFT + Tdemodulation + Tdecode + TUL-HARQ + Tother, where TFFT, Tdemodulation, Tdecode, TUL-HARQ, and Tother denote the processing time for FFT/IFFT per symbol, the demodulation time for one symbol, the decoding time for one symbol's code blocks, the processing time for the uplink ACK/NACK, and the other implementation-specific processing times, respectively. For non-slot-based scheduling (e.g., a two-symbol mini-slot) and a subcarrier spacing of 15 kHz, it can be shown that the UE processing requirement is T2 = TPDCCH + 2Tdemodulation + Tdecode + TUL-HARQ + Tother, where TPDCCH denotes the processing time for the PDCCH, including decoding, demodulation, and parsing. The above processing time calculation was done under certain conditions which may vary in different scenarios.

Figure 4.58 UE processing time components (front-loaded DM-RS, slot-based scheduling, and 15 kHz SCS); the figure shows the PDCCH processing, FFT, channel estimation, demodulation, decoding, and HARQ-ACK preparation stages relative to the N1 OFDM symbols available after the PDSCH, for initial transmissions and retransmissions.

There is no limit for scheduling transmissions/retransmissions; however, the initial transmission is slot-based, and the retransmissions may be non-slot-based, as shown in Fig. 4.58. In the following example, we assume the subcarrier spacing is 15 kHz, D1 is a slot-based downlink scheduling period with front-loaded DM-RS, and D2 is a non-slot-based scheduling period. If D2 is the initial transmission, then the processing time is calculated as T2; nevertheless, if D2 is the retransmission of D1, then we need to perform TB decoding over 14 OFDM symbols; thus the processing time for the retransmission becomes TPDCCH + Tdemodulation + 13Tdecode + TUL-HARQ + Tother, meaning that, under the same conditions, the processing times for the initial transmission and the retransmission are different [65]. As we mentioned earlier, the maximum number of HARQ processes per carrier supported in NR is 8 or 16. For continuous downlink transmission at the peak data rate, the minimum number of HARQ processes is min(NDL-HARQ) = (K1 + K3 + 2Td)/TTTI,DL, in which Td denotes the transmission delay. The required number of HARQ processes may vary depending on the UE HARQ processing capability, numerology, and network configuration. The determination of the number of HARQ processes is up to the gNB scheduler and is thus signaled via the DCI. To reduce the overhead, the gNB can semi-statically configure a UE with a smaller number of HARQ processes than 16 per bandwidth part. In order to reduce the latency due to HARQ retransmissions and to avoid retransmission of the entire TB and the performance degradation of HARQ due to a large transport block size, NR defined CBG-based transmission and HARQ operation, which supports single-/multi-bit HARQ-ACK feedback in Rel-15. The CBG-based (re)transmissions are only allowed for the same TB of a HARQ process. A CBG can include all code blocks of a TB regardless of the size of the TB, in which case the UE reports a single HARQ-ACK bit for the TB. A CBG can also include one code block, and its granularity is configurable.
The UE is semi-statically configured by RRC signaling to enable CBG-based retransmission. When the CSI request field in a DCI triggers CSI report(s) on PUSCH, the UE is required to provide valid CSI report(s) if the first uplink symbol to carry the corresponding CSI report(s), including the effect of the timing advance, starts no earlier than at symbol lCSI, and if the first uplink symbol to carry the corresponding CSI report, including the effect of the timing advance, starts no earlier than at symbol l'CSI. The reference symbol lCSI is defined as the next uplink symbol with its cyclic prefix starting Tproc,CSI = Z × 2192 × κ × 2^(−μ) × TC after the end of the last symbol of the PDCCH triggering the CSI report. The reference symbol l'CSI is defined as the next uplink symbol with its cyclic prefix starting T'proc,CSI = Z' × 2192 × κ × 2^(−μ) × TC after the end of the last symbol of the latest of the aperiodic CSI-RS resource for channel measurements, the aperiodic CSI-IM used for interference measurements, and the aperiodic NZP CSI-RS for interference measurement, when aperiodic CSI-RS is used for channel measurement for the triggered nth CSI report [9].

Table 4.19: NR channel state information computation delay requirements 1 and 2 [9] (the table gives the CSI computation delays Z1, Z'1 and Z2, Z'2 in OFDM symbols as a function of the numerology μ).

As a result, lCSI and l'CSI are defined as lCSI = max lCSI(m) and l'CSI = max l'CSI(m) over m = 0, 1, ..., NCSI − 1, where NCSI is the number of updated CSI report(s), and lCSI(m) and l'CSI(m) correspond to the mth updated CSI report. The delays used to derive lCSI(m) and l'CSI(m) are set to Z1, Z'1, Z2, or Z'2 depending on the CSI computation delay requirements, as shown in Table 4.19, where μ corresponds to the smallest of (μPDCCH, μCSI-RS, μUL), in which μPDCCH corresponds to the subcarrier spacing of the PDCCH in which the DCI was transmitted, μUL corresponds to the subcarrier spacing of the PUSCH in which the CSI report is transmitted, and μCSI-RS corresponds to the minimum subcarrier spacing of the aperiodic CSI-RS triggered by the DCI.

4.1.8.3 Semi-static/Dynamic Codebook HARQ-ACK Multiplexing
NR supports multiplexing of HARQ acknowledgments for multiple transport blocks into an acknowledgment bitmap, when multiple TBs need to be acknowledged at the same time or, alternatively, when multiple acknowledgments need to be transmitted in the uplink at the same time in carrier aggregation and CBG-based retransmission scenarios. This bitmap can be signaled either via a semi-static codebook or a dynamic codebook, both configured through RRC signaling. The semi-static codebook can be viewed as a matrix consisting of a time-domain dimension and a component-carrier, CBG, or MIMO layer dimension, both of which are semi-statically configured. The size in the time domain is given by the maximum and minimum HARQ acknowledgment timings, and the size in the carrier domain is given by the number of simultaneous transport blocks or CBGs across all component carriers. An example is shown in Fig. 4.59, where the acknowledgment timings are one, two, three, and four slots, respectively, and three carriers are configured: one with two TBs, one with one TB, and one with four CBGs. Since the codebook size is fixed, the number of bits to transmit in a HARQ-ACK is known and an appropriate format for the uplink control signaling can be selected. Each entry in the matrix represents the successful/unsuccessful outcome of the decoding of the corresponding downlink transmission. A NACK is sent in the position of unused transmission opportunities in the codebook, resulting in improved robustness in the case of a missed downlink assignment, where the gNB can retransmit the missing TB or CBG [14].

Figure 4.59 Illustration of the semi-static and dynamic codebooks (example with three component carriers; in the semi-static codebook unscheduled opportunities are set to NACK to form a fixed-size HARQ-ACK report, while in the dynamic codebook each assignment carries a (cDAI, tDAI) pair so the UE knows how many ACK/NACKs the gNB expects and which transmission is missing) [14].

One drawback of the semi-static codebook is the potentially large size of the HARQ feedback. For a small number of component carriers and no CBG-based retransmissions, this may not be a problem; however, if a large number of carriers or CBGs are configured and only a fraction of them are used, the semi-static codebook becomes inefficient. To address the drawback of a potentially large semi-static codebook size in some scenarios, NR also supports a dynamic HARQ-ACK codebook, which is the default HARQ-ACK codebook unless the system is configured otherwise. With a dynamic codebook, only the acknowledgment information for the scheduled carriers is included in the HARQ feedback, as opposed to all carriers, as is the case for a semi-static codebook. Hence, the size of the codebook may dynamically vary as a function of the number of scheduled carriers, which reduces the size of the HARQ acknowledgment message. A dynamic HARQ-ACK codebook would be a straightforward option if there were no errors in the downlink control signaling. However, in the presence of an error in the downlink control signaling, the UE and the gNB may have different understandings of the number of scheduled carriers, which would lead to an incorrect codebook size and possibly corrupted HARQ feedback for all carriers. As an example, assume that a device is scheduled for downlink transmission in two consecutive slots, but the PDCCH of the first slot is not received; thus the scheduling assignment for the first slot is missing. In this case, the UE will transmit an acknowledgment for the second slot only, while the gNB expects to receive acknowledgments for two slots. To mitigate such cases, NR supports the downlink assignment index (DAI), which is included in the DCI containing the downlink assignment. The DAI field is further divided into two parts, a counter DAI (cDAI) and, in the case of carrier aggregation, a total DAI (tDAI).
The cDAI included in the DCI indicates the number of scheduled downlink transmissions up to the point the DCI was received, in a carrier-first, time-second order. The tDAI included in the DCI indicates the total number of downlink transmissions across all carriers up to this point in time, that is, the highest cDAI at the current point in time. In principle, the cDAI and tDAI could be represented by integers with no upper limit; in practice, 2 bits are used for each, and the numbering is computed modulo four. This can be compared with the semi-static codebook, which would require a fixed number of entries regardless of the number of active transmissions. If one transmission on a component carrier is lost, without the DAI mechanism this would result in mismatched codebooks between the UE and the gNB; however, as long as the device receives at least one component carrier, it knows the value of the tDAI and hence the size of the codebook at this point in time. Furthermore, by checking the values received for the cDAI, it can conclude which component carrier was missed and that a negative acknowledgment should be assumed in the codebook for this position [8].
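The bookkeeping that the cDAI/tDAI mechanism enables at the UE can be illustrated as follows (Python). This is a simplified model in which the tDAI is treated as the running total of assignments modulo four, and it assumes fewer than four consecutive assignments are missed; it is not a normative description of the field encoding.

```python
def missing_assignments(num_detected, tdai, dai_bits=2):
    """Number of scheduled assignments the UE failed to detect.

    num_detected : count of downlink assignments whose PDCCH was decoded
    tdai         : total DAI value from the last decoded PDCCH (modulo 2**dai_bits)
    """
    modulo = 2 ** dai_bits
    # Smallest total count consistent with the received tDAI and the detections
    total = num_detected
    while total % modulo != tdai % modulo:
        total += 1
    return total - num_detected

# Example: three assignments decoded, tDAI indicates a running total of 4 (4 mod 4 = 0)
print(missing_assignments(3, tdai=0))   # -> 1 missed assignment, reported as NACK
```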
There are two different types of codebook determination algorithms, referred to as Type 1 and Type 2. Each of these types is divided into two cases depending on whether the HARQ-ACK is reported on PUCCH or PUSCH, whose usage is configured and signaled through RRC parameters. The codebook determination algorithm types and the associated RRC parameters are summarized in Table 4.20.

Table 4.20: HARQ-ACK codebook determination [8].
Codebook determination type | Condition
CBG-based HARQ-ACK codebook determination | PDSCH-CodeBlockGroupTransmission = ON
Type-1 HARQ-ACK codebook determination (Type-1 codebook in PUCCH or in PUSCH) | pdsch-HARQ-ACK-Codebook = semistatic
Type-2 HARQ-ACK codebook determination (Type-2 codebook in PUCCH or in PUSCH) | pdsch-HARQ-ACK-Codebook = dynamic, PDSCH-CodeBlockGroupTransmission = OFF

4.1.9 Downlink MIMO Schemes

The new radio downlink control and traffic channels rely on DM-RSs to facilitate coherent detection, where a UE can assume that the DM-RSs are jointly precoded with the data. In NR Rel-15, downlink multi-antenna precoding is transparent to the UE, and the network can apply any transmitter-side precoding without informing the device about the selected precoder. The specification impact of downlink multi-antenna precoding is therefore mainly related to the measurement and reporting mechanisms conducted by the device to support network selection of the precoder for downlink transmissions. These precoder-related measurements and reports are part of the more general CSI reporting framework based on report configurations, which was described in previous sections. A CSI report may consist of a rank indicator (RI), a precoder-matrix indicator (PMI), and a CQI, indicating a suitable transmission rank, a precoding matrix given the selected rank, and a coding and modulation scheme given the selected precoder matrix, respectively, from the device perspective. As we mentioned earlier, the reported PMI is a suitable precoder matrix from the UE perspective to be used for downlink transmissions. Each PMI value corresponds to a specific precoder matrix or codebook. Note that the device selects the PMI based on a certain number of antenna ports, given by the number of antenna ports of the configured CSI-RS associated with the report configuration, and the selected rank. There is at least one codebook for each valid combination of antenna ports and rank. It must be understood that the codebook suggested by the UE does not compel the gNB; in practice, the gNB may use a different precoding matrix for the downlink transmission to the device. The MU-MIMO schemes typically require more detailed knowledge of the channel experienced by each device at the gNB compared to SU-MIMO precoding and transmission to a single device. Therefore, NR defines two types of CSI that differ in the structure and size of the precoding matrices (codebooks), that is, Type I CSI and Type II CSI. Type I CSI primarily targets scenarios where a single user is scheduled within a given time/frequency resource, potentially with transmission of a relatively large number of layers in parallel (high-order spatial multiplexing), and Type II CSI mainly targets MU-MIMO scenarios with multiple devices being scheduled simultaneously within the same time/frequency resources but with only a limited number of spatial layers (a maximum of two layers) per scheduled device. The codebooks for Type I CSI are relatively simple and are used to focus the transmit energy at the target receiver. Inter-layer interference is assumed to be primarily handled by utilizing multiple receive antennas and advanced receiver architectures. In contrast, the codebooks for Type II CSI are significantly more extensive, allowing the PMI to provide channel information with much higher spatial granularity. The more extensive channel information allows the network to select a downlink precoder that not only focuses the transmitted energy at the target device but also limits the interference to other devices simultaneously scheduled on the same time/frequency resources. The higher spatial granularity of the PMI feedback comes at the cost of significantly higher signaling overhead. While a PMI report for Type I CSI may consist of a few tens of bits, the PMI report for Type II CSI may consist of several hundred bits. Therefore, Type II CSI is mainly applicable to low-mobility scenarios where the feedback periodicity in time can be reduced.

4.1.9.1 Capacity of MIMO Channels

A generic MIMO system consists of a MIMO transmitter with NTX transmit antennas, a MIMO receiver with NRX receive antennas, and NRX × NTX paths or channels between the transmit and receive antennas. Let xk(t) denote the transmitted signal from the kth transmit antenna at time t; then the received signal at the lth receive antenna can be expressed as yl(t) = Σk hlk(t) * xk(t) + nl(t),
where h_lk(t) and n_l(t) are, respectively, the channel impulse response between the kth transmit antenna and the lth receive antenna and the additive noise at the lth receive antenna port, and * denotes convolution. The preceding equation can be written in the frequency domain as Y_l(ω) = Σ_k H_lk(ω) X_k(ω) + N_l(ω). If x(ω) = [X_1(ω), X_2(ω), ..., X_NTX(ω)]^T, y(ω) = [Y_1(ω), Y_2(ω), ..., Y_NRX(ω)]^T, and n(ω) = [N_1(ω), N_2(ω), ..., N_NRX(ω)]^T denote the Fourier transform vectors of x_k(t), y_l(t), and n_l(t), respectively, then y(ω) = H(ω)x(ω) + n(ω), where H(ω) is an N_RX × N_TX channel matrix with entries H_lk(ω). Assuming a linear time-invariant MIMO channel, the input-output relationship can be further described in the discrete time domain as y_l(n) = Σ_{k=1}^{N_TX} Σ_{m=0}^{M−1} h_lk(m) x_k(n − m) + n_l(n), l = 1, 2, ..., N_RX, where M is the number of channel taps and x_k(n), k = 1, 2, ..., N_TX, and y_l(n), l = 1, 2, ..., N_RX, represent the channel input and output time-domain signals, respectively. In the case of time-varying channels, the preceding equation can be written as y_l(t) = Σ_k ∫ h_lk(t, τ) x_k(τ) dτ + n_l(t), where h_lk(t, τ) denotes the time-varying impulse response of the lkth channel. The matrix form of the MIMO channel output in the frequency domain, sampled at a single frequency ω_m, can be written as y(ω_m) = H(ω_m)x(ω_m) + n(ω_m). In an OFDM system, the signal processing is inherently performed in the frequency domain. OFDM transforms a frequency-selective fading channel into a flat-fading channel when considering narrowband orthogonal subcarriers. In such a system, the MIMO signal processing can be performed at each subcarrier, which is the main reason for the suitability of the MIMO extension to an OFDM system. When MIMO processing is performed at each subcarrier, the MIMO channel input-output relationship can be written as y = Hx + n, where the channel between the transmitter and the receiver is typically modeled as a finite impulse response (FIR) filter. In this case, each tap is typically a complex-valued Gaussian random variable with exponentially decaying magnitude. The tap delays correspond to the RMS delay spread and the channel type (e.g., low-delay spread or flat fading, high-delay spread or frequency-selective fading). There is a new realization of the channel at every transmitted packet if the channel remains invariant for the duration of the packet; otherwise, the variation of the channel is explicitly modeled in the signal detection. As mentioned earlier, there are N_RX × N_TX paths between the transmitter and the receiver, where each channel is the sum of several FIR filters with different delay spreads. The channels may or may not be correlated. The MIMO schemes can be used with non-OFDM systems when the channel is modeled as flat fading such that y_l(nT) = Σ_k h_lk x_k(nT) + n_l(nT). An important question is to what extent MIMO techniques can increase the throughput and improve the reliability of wireless communication systems. This question can be answered by calculating the information theoretic capacity of a single-input-single-output (SISO) channel and comparing it with the capacity of single-input-multiple-output (SIMO), multiple-input-single-output (MISO), and MIMO channels. For a memoryless SISO channel (i.e., one transmit and one receive antenna), the channel capacity is given by C_SISO = log2(1 + γ|h|^2), where h is the normalized complex-valued gain/attenuation of a fixed wireless channel or that of a particular realization of a random channel, and γ = E_s/N_0 denotes the SNR at the receive antenna port. As the number of receive antennas increases, the statistics of the channel capacity improve. Using N_RX receive antennas and one transmit antenna, a SIMO system is formed with a capacity (when the channel is unknown to the transmitter) given by C_SIMO = log2(1 + γ Σ_{i=1}^{N_RX} |h_i|^2), where h_i is the gain of the ith channel corresponding to the ith receive antenna. Note that increasing the value of N_RX results in a logarithmic increase in the average channel capacity. It can be shown that knowledge of the channel at the transmitter in the SIMO case does not provide any capacity benefit. In the case of MISO or transmit diversity, where the transmitter does not typically have knowledge of the channel, the capacity is given by C_MISO = log2(1 + (γ/N_TX) Σ_{i=1}^{N_TX} |h_i|^2). It is noted that the normalization by N_TX reflects the fixed total transmit power split equally across the transmit antennas, so that, as in the SIMO case, the capacity grows only logarithmically with the number of antennas.
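To make the preceding expressions concrete, the short Python sketch below evaluates the SISO, SIMO, MISO, and open-loop MIMO capacity formulas for a single Rayleigh-fading realization; it is an illustrative example only, and the 4×4 antenna configuration, the random channel draw, and the 10 dB SNR are assumptions made for the sketch, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
snr = 10.0                      # linear SNR (gamma = Es/N0), i.e., 10 dB, assumed
n_tx, n_rx = 4, 4               # antenna counts assumed for illustration

# Rayleigh fading: i.i.d. complex Gaussian channel coefficients with unit average power
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# C_SISO = log2(1 + gamma*|h|^2), using the (0,0) entry as the scalar channel
c_siso = np.log2(1 + snr * abs(H[0, 0]) ** 2)

# C_SIMO = log2(1 + gamma*sum_i |h_i|^2), first column as the receive-array channel
c_simo = np.log2(1 + snr * np.sum(abs(H[:, 0]) ** 2))

# C_MISO = log2(1 + (gamma/N_TX)*sum_i |h_i|^2), first row, equal power per antenna
c_miso = np.log2(1 + (snr / n_tx) * np.sum(abs(H[0, :]) ** 2))

# Open-loop MIMO: C = log2 det(I + (gamma/N_TX) * H * H^H)
c_mimo = np.log2(np.linalg.det(np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T).real)

print(f"SISO {c_siso:.2f}  SIMO {c_simo:.2f}  MISO {c_miso:.2f}  MIMO {c_mimo:.2f} bps/Hz")
```

Running the sketch for several random draws shows the qualitative behavior described above: the SIMO/MISO capacities exceed the SISO capacity by a logarithmic margin, while the MIMO capacity grows roughly linearly with min(N_TX, N_RX).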
The capacity of the MIMO channel can be calculated under various conditions and assumptions. Depending on whether the receiver has perfect channel knowledge and whether the channel exhibits flat fading or frequency-selective fading, different expressions for the channel capacity are obtained. The MIMO channel capacity can be analyzed under two different assumptions: (1) the transmitter has no channel knowledge, and (2) the transmitter has perfect channel knowledge, obtained through feedback from the receiver or through reciprocity of the downlink and uplink channels. Let Q denote the N_TX × N_TX covariance matrix of the channel input vector x, and let us further assume that the channel is unknown to the transmitter. It can then be shown that the MIMO channel capacity can be written as C_MIMO = log2 det(I + HQH^H), where the constraint tr(Q) ≤ γ ensures that the total signal power does not exceed a certain limit. It can be shown that for equal transmit power and uncorrelated sources, Q = (γ/N_TX)I. This choice is optimal when the channel matrix is unknown to the transmitter and the input signal is Gaussian-distributed, which maximizes the mutual information. If the receiver measures and sends channel quality feedback or CSI to the transmitter, the covariance matrix Q is no longer proportional to the identity matrix; rather, it is constructed from a water-filling algorithm. Comparing the capacity achieved with equal transmit power and an unknown channel against that achieved with perfect channel knowledge obtained through feedback yields the capacity gain due to the use of feedback.
For the independent identically distributed Rayleigh fading scenario, the linear capacity growth discussed earlier is observed. It can be shown that the MIMO channel capacity can be written as C_MIMO = Σ_{i=1}^{min(N_TX,N_RX)} log2(1 + (γ/N_TX)λ_i), where λ_i, i = 1, 2, ..., min(N_TX,N_RX), are the non-zero eigenvalues of HH^H. We can decompose the MIMO channel into K ≤ min(N_TX,N_RX) equivalent parallel SISO channels using the singular value decomposition (SVD) theorem. Let y = Hx + n describe the input-output relationship of the MIMO channel, where y is the output vector with N_RX components, x is the input vector with N_TX components, n is the additive noise vector with N_RX components, and H is the N_RX × N_TX channel matrix. Using the SVD theorem, we can write H = UΣV^H. Let x̃ = V^H x, ỹ = U^H y, and ñ = U^H n denote the unitary transformations of the channel input, output, and noise vectors; it can then be shown that ỹ = Σx̃ + ñ. Since U and V are unitary matrices, the capacity of this model is the same as the capacity of the model y = Hx + n. However, Σ = diag(σ_1, σ_2, ..., σ_K, 0, ..., 0) is a diagonal matrix with K non-zero singular values on its main diagonal, where K ≤ min(N_TX,N_RX) is the rank of H. The latter equation is therefore conceptually equivalent to K parallel SISO eigenchannels, the ith of which has a power gain of σ_i^2 = λ_i, i = 1, 2, ..., K. As a result, the MIMO channel capacity can be rewritten in terms of the eigenvalues λ_i and the input signal covariance matrix Q. When channel knowledge is available at the transmitter and receiver, H is known, and we can optimize the capacity over Q subject to the power constraint tr(Q) ≤ γ. It is shown in the literature that the optimal Q in this case exists and is known as the water-filling solution. The channel capacity in this case is given by C_MIMO = Σ_{i=1}^{K} [log2(μλ_i)]^+, where the water level μ is chosen such that Σ_{i=1}^{K} (μ − 1/λ_i)^+ = γ and (x)^+ denotes max(x, 0). The effect of various channel conditions on the channel capacity has been extensively studied in the literature. For example, increasing the LoS signal strength at fixed SNR reduces capacity in Rician channels [22,24]. This can be explained in terms of the channel matrix rank or through various eigenvalue properties. The issue of correlated fading is of considerable importance for implementations where the antennas are required to be closely spaced.
Note: The decomposition of an N × N Hermitian matrix into the quadratic product of an N × N unitary matrix composed of eigenvectors and an N × N diagonal matrix of eigenvalues can be generalized to M × N complex-valued matrices of rank K. If A is an M × N (M ≥ N) complex-valued matrix of rank K, then A = UΣV^H denotes the singular value decomposition of A. The M × M unitary matrix U is composed of the eigenvectors of AA^H, that is, U = (u_1, u_2, ..., u_M), where AA^H u_i = σ_i^2 u_i. The N × N unitary matrix V is composed of the eigenvectors of A^H A, that is, V = (v_1, v_2, ..., v_N), where A^H A v_i = σ_i^2 v_i. The non-zero elements of the M × N matrix Σ are the square roots of the eigenvalues of A^H A, known as the singular values of A, so that A may be written as A = Σ_{i=1}^{K} σ_i u_i v_i^H. The number of non-zero singular values of matrix A equals the rank of A. The singular values of matrix A are positive real numbers which satisfy σ_1 ≥ σ_2 ≥ ... ≥ σ_K > 0.
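The following Python sketch is illustrative only: it decomposes a random MIMO channel into its eigenchannels via the SVD, applies a water-filling power allocation of the kind described above, and compares the resulting capacity with equal-power allocation. The 4×4 Rayleigh channel, the 10 dB total SNR, and the simple bisection search for the water level are all assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx = n_rx = 4
gamma = 10.0                                      # total SNR budget (linear), assumed

H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
lam = np.linalg.svd(H, compute_uv=False) ** 2     # eigenchannel power gains lambda_i = sigma_i^2

def waterfill(lam, total_power):
    """Bisect on the water level mu so that sum((mu - 1/lambda_i)^+) = total_power."""
    lo, hi = 0.0, total_power + float(np.max(1.0 / lam))
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / lam, 0.0)
        lo, hi = (mu, hi) if p.sum() < total_power else (lo, mu)
    return np.maximum(mu - 1.0 / lam, 0.0)

p_wf = waterfill(lam, gamma)
c_wf = np.sum(np.log2(1 + p_wf * lam))            # capacity with CSI at the transmitter
c_eq = np.sum(np.log2(1 + (gamma / n_tx) * lam))  # open-loop, equal power per eigenchannel

print(f"water-filling: {c_wf:.2f} bps/Hz, equal power: {c_eq:.2f} bps/Hz")
```

As expected, the gain of water-filling over equal-power allocation is largest at low SNR or when the eigenvalue spread of HH^H is large, and it vanishes when all eigenchannels are comparably strong.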
The optimal water-filling allocation strategy is obtained when the power allocated to each spatial subchannel is non-negative. In the design of wireless communication systems, the main objective is to exploit transmission schemes whose performance approaches the channel capacity as closely as possible. Therefore, it is important to understand the underlying concepts and the various information theoretic definitions of channel capacity, as well as what can be pragmatically achieved under realistic channel conditions and transceiver implementations. Let us begin our concise study with the most generic definition of channel capacity. Denoting the input and output of a memoryless SISO wireless channel by the random variables X and Y, respectively, the channel capacity is defined as C = max_{p(x)} I(X;Y), where I(X;Y) represents the mutual information between X and Y. Shannon's theorem [19] provides an operational meaning to this definition as the number of bits that can be transmitted reliably over the channel with vanishing probability of error. The mutual information is maximized with respect to all possible statistical distributions p(x) of the transmit signal. Mutual information is a measure of the amount of information that one random variable contains about another random variable. The mutual information between X and Y can also be written as I(X;Y) = H(Y) − H(Y|X), where H(Y|X) represents the conditional entropy between the random variables X and Y.
The entropy of a random variable can be described as the measure of uncertainty in the random variable, or the amount of information required on average to describe it. Thus, the mutual information representation of channel capacity can be described as the reduction in the uncertainty of one random variable due to knowledge of the other. Note that the mutual information between X and Y depends on the properties of the channel through the channel matrix H and on the properties of X through the probability distribution of X [22]. Throughout this section, it is assumed that the channel matrix H is random and that the receiver has perfect channel knowledge. It is also assumed that the channel is memoryless, that is, for each use of the channel an independent realization of H is drawn. This means that the capacity can be computed as the maximum of the mutual information as defined earlier. The results are also valid when H is generated by an ergodic process, because as long as the receiver observes the H process, only the first-order statistics are needed to determine the channel capacity. The ergodic (mean) capacity of a random channel with N_RX = N_TX = 1 and an average transmit power constraint P_T can be expressed as C_ergodic = E_H{ max_{p(x): P ≤ P_T} I(X;Y) }, where P is the average power of a single codeword transmitted over the channel and E_H{·} denotes the expectation over all channel realizations. Compared to the generic definition, the capacity of the channel is now defined as the maximum of the mutual information between the input and the output over all statistical distributions on the input that satisfy the power constraint. In general, the capacity of a random MIMO channel with power constraint P_T can be expressed as C = max_{tr(Q) ≤ P_T} I(x;y), where Q = E[xx^H] is the covariance matrix of the transmit signal vector x. The total transmit power is limited to P_T irrespective of the number of transmit antennas. For a fading channel, the channel matrix H is a stochastic process; thus the associated channel capacity C(H) is a random variable. In this case, the ergodic channel capacity is defined as the average of the instantaneous channel capacity over the distribution of H. Assuming unit-variance noise, the ergodic channel capacity of the MIMO transmission scheme is given by C_ergodic = E_H{ max_{tr(Q) ≤ P_T} log2 det(I + HQH^H) }, where Q denotes the N_TX × N_TX covariance matrix of the channel input vector x. According to information theoretic concepts, this capacity cannot be achieved unless channel coding is employed across an infinite number of independently fading blocks. Let us focus on the case of perfect CSI at the receiver side and no CSI at the transmitter side, which implies that the maximization in the latter equation is more restricted than in the previous case. Nevertheless, it has been shown in the literature that the optimal signal covariance matrix must be chosen as Q = (P_T/N_TX)I. This means that the antennas should transmit uncorrelated streams with the same average power. With this result, the ergodic MIMO channel capacity reduces to [15] C_ergodic = E_H{log2 det(I + (γ/N_TX)HH^H)}, where γ denotes the SNR. It is obvious that this is not the Shannon capacity in a true sense since, as mentioned earlier, a transmitter with perfect channel knowledge can choose a signal covariance matrix that outperforms the Q = (P_T/N_TX)I case. Nevertheless, we refer to the preceding expression as the ergodic channel capacity with CSI at the receiver and no CSI at the transmitter. The capacity under channel ergodicity is defined as the average of the maximum value of the mutual information between the transmitted and received signals, where the maximization is carried out with respect to all possible transmit signal statistical distributions. Another frequently used measure of channel capacity is the outage capacity. With outage capacity, the channel capacity is associated with an outage probability. Capacity is treated as a random variable which depends on the instantaneous channel response and remains constant during the transmission of a finite-length coded block of information. If the channel capacity falls below the outage capacity, there is no possibility that the transmitted block of information can be decoded without errors, no matter which coding scheme is employed. The probability that the capacity is less than the outage capacity C_outage is p, which can be expressed mathematically as P(C < C_outage) = p. In this case, the latter expression represents an upper bound, since there is a finite probability p that the channel capacity is less than the outage capacity. It can also be written as a lower bound, representing the case where there is a finite probability (1 − p) that the channel capacity is higher than C_outage, that is, P(C > C_outage) = 1 − p. In other words, since the MIMO instantaneous channel capacity is a random variable, it is meaningful to consider its statistical distribution, and a useful measure of its statistical behavior is the outage capacity. Outage analysis quantifies the level of performance (in this case, capacity) that is guaranteed with a certain level of reliability. The p% outage capacity C_outage(p) is defined as the information rate that is guaranteed for (100 − p)% of the channel realizations, that is, P(C ≤ C_outage(p)) = p%. The outage capacity is often a more relevant measure than the ergodic channel capacity, because it describes in some way the quality of the channel. This is due to the fact that the outage capacity measures the probabilistic distribution of the instantaneous rate supported by the channel. Thus, if the rate supported by the channel is spread over a wide range, the outage capacity for a fixed probability level may become small, whereas the ergodic channel capacity may be high [15].
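As a numerical illustration of the two definitions, the sketch below estimates the ergodic capacity as the mean of the instantaneous open-loop capacity over many channel realizations, and the 10% outage capacity as the corresponding percentile of the same distribution. The 4×4 i.i.d. Rayleigh channel, the 10 dB SNR, and the 10,000 trials are assumptions of this example, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n_tx = n_rx = 4
gamma = 10.0                 # SNR (linear), assumed
trials = 10_000

caps = np.empty(trials)
for t in range(trials):
    H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    # instantaneous open-loop capacity: log2 det(I + (gamma/N_TX) H H^H)
    caps[t] = np.log2(np.linalg.det(np.eye(n_rx) + (gamma / n_tx) * H @ H.conj().T).real)

c_ergodic = caps.mean()                  # ergodic (mean) capacity
c_outage_10 = np.percentile(caps, 10)    # 10% outage capacity: P(C < C_outage) = 0.1

print(f"ergodic: {c_ergodic:.2f} bps/Hz, 10% outage: {c_outage_10:.2f} bps/Hz")
```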
4.1.9.2 Single-User and Multi-user MIMO
Single-user MIMO (SU-MIMO) techniques are point-to-point schemes that improve channel capacity and reliability through the use of space-time/space-frequency codes (transmit/receive diversity) in conjunction with spatial multiplexing schemes. In an SU-MIMO transmission, the advantage of MIMO processing is obtained from the coordination of processing among all the transmitters or receivers. In the multi-user channel, on the other hand, it is usually assumed that there is no coordination among the users. As a result of this lack of coordination, the uplink and downlink multi-user MIMO channels are different. In the uplink scenario, users transmit to the base station over the same channel. The challenge for the base station is to separate the signals transmitted by the users, using array processing or multi-user detection methods. Since the users are not able to coordinate with each other, there is not much that can be done to optimize the transmitted signals with respect to each other. If some channel feedback is allowed from the base station back to the users, some coordination may be possible, but it may require that each user know all the other users' channels rather than only its own. Otherwise, the challenge in the uplink is mainly in the processing done by the base station to separate the users. In the downlink channel, where the base station simultaneously transmits to a group of users over the same channel, each user experiences some inter-user interference generated by the signals transmitted to the other users. Using multi-user detection techniques, it may be possible for a given user to overcome the multiple access interference, but such techniques are often extremely complicated for use at the receivers. Ideally, one would like to mitigate the interference at the transmitter by carefully designing the transmit signal. If CSI is available at the transmitter, it is aware of what interference is caused for each user by the signals it transmits to the other users. The inter-user interference can be mitigated by beamforming or by the use of dirty paper codes. In general, single-user and multi-user MIMO schemes are compared as follows [15]:
- SU-MIMO is a point-to-point link with predictable link capacity, whereas the MU-MIMO channel is a broadcast channel (BC) in the downlink direction and a multiple access channel (MAC) in the uplink direction, whose link-level data rates are characterized in terms of capacity regions.
- Multi-layer SU-MIMO schemes offer layer/stream diversity in the sense that if one stream has a poor SNR, the system will not necessarily experience an outage, whereas in the same situation a MU-MIMO system will be in outage. This is because in MU-MIMO schemes the users typically have an equal target data rate and symbol error rate on their respective links, while in SU-MIMO systems only the sum rate of the overall link is considered, since all streams are delivered to the same user.
- MU-MIMO schemes suffer from the near-far problem due to the significant difference between the path losses experienced by each user, resulting in a large deviation in the SINR of the corresponding user links. This would benefit the users with better channel conditions, while there is no near-far problem in SU-MIMO systems. The near-far problem in MU-MIMO systems may be alleviated via appropriate grouping of users with similar channel conditions.
- The use of cooperative collocated transmit antennas in SU-MIMO schemes can facilitate the encoding at the transmitter and the decoding at the receiver. In contrast, the users in a MU-MIMO scheme can cooperate in encoding at the base station in the downlink and in decoding in the uplink; however, the users cannot cooperate in decoding in the downlink or encoding in the uplink direction.
- The capacity of the downlink and uplink is theoretically identical in SU-MIMO systems (given the same transmit power and perfect channel knowledge at the transmitter and the receiver); however, the capacities of the MU-MIMO BC and MAC are not identical.
- The capacity of SU-MIMO schemes is less impacted by the lack of CSI at the transmitter, whereas the capacity of the MU-MIMO BC significantly suffers from the lack of CSI at the transmitter.
- SU-MIMO suffers from limited exploitation of multi-user diversity. The number of spatial dimensions is limited by the number of antennas at the UE. Spatial dimensions may potentially be wasted if the UEs have a smaller number of antennas than the base station.
- MU-MIMO more efficiently exploits multi-user diversity, since all spatial dimensions supported by the base station can be exploited. It achieves a capacity gain if the UEs have a smaller number of antennas relative to the base station. Stronger spatial dimensions can be exploited; however, in the case of a low-rank channel, the utilized spatial dimensions may be weak due to spatial correlation.
The advantages/disadvantages of SU-MIMO and MU-MIMO schemes are summarized in Table 4.21.
Table 4.21: Comparison of SU-MIMO and MU-MIMO schemes.
              | SU-MIMO                                                        | MU-MIMO
Advantages    | High user throughput; high peak data rates                     | High system capacity; full exploitation of multiuser diversity
Disadvantages | Multiple transmit antennas at the base station are not fully exploited; multiuser diversity is not fully exploited | Degradation of peak data rates due to interuser interference
Figure 4.60 Capacity region of MU-MIMO BC with two users compared to SU-MIMO (the single-user maximum rates and the MAC maximum sum rate region are indicated).
An important metric for measuring the performance of any communication channel is the information theoretic capacity. In an SU-MIMO channel, the capacity is the maximum amount of information that can be transmitted as a function of the available bandwidth, given a constraint on the transmitted power. In SU-MIMO channels, it is common to assume that the total power distributed among all transmit antennas is limited. For the multi-user MIMO channel, the problem is somewhat more complex. Given a constraint on the total transmit power, it is possible to allocate varying fractions of that power to different users in the network; thus, for any value of total power, different information rates are obtained. The result is the capacity region shown in Fig. 4.60 for a two-user MU-MIMO channel.
The maximum capacity for user 1 is achieved when 100% of the power is allocated to user 1, and the maximum capacity for user 2 is likewise obtained when user 2 is allocated the full power. For every possible power distribution, there is an achievable information rate, which results in the capacity regions depicted in the figure. Two regions are shown in Fig. 4.60: the larger one for the case where both users have roughly the same maximum capacity (similar channel conditions), and the other for the case where one of the users has a much better channel condition than the other. For Nuser users, the capacity region is characterized by an Nuser-dimensional hyper-region. Let us use a simple MU-MIMO system model to demonstrate how the sum rate of the system is calculated. As shown in Fig. 4.61, the transmit vector x can be expressed as the weighted (precoded) sum of the input data symbols s_k, k = 1, 2, ..., Nuser, as x = Ps = Σ_{k=1}^{Nuser} p_k s_k, where s = (s_1, s_2, ..., s_Nuser)^T is the Nuser × 1 vector of data symbols from the Nuser users and P = (p_1, p_2, ..., p_Nuser) is the precoding matrix comprising the Nuser precoding vectors. It is assumed that the transmit power is finite and can be calculated as P_TX = E{x^H x}. The kth complex-valued output of the system can be written as y_k = H_k x + n_k ∈ C^N, where N denotes the dimension of the vector y_k. The kth user's data can be detected using a linear minimum mean squared error (MMSE) receiver as ŝ_k = w_k^H y_k ∈ C, in which the MMSE weighting vector (assuming unit-variance data symbols and noise variance σ_n^2) is given by w_k = (H_k P P^H H_k^H + σ_n^2 I)^{-1} H_k p_k. It can further be shown that the SINR at the kth output is SINR_k = |w_k^H H_k p_k|^2 / (Σ_{j≠k} |w_k^H H_k p_j|^2 + σ_n^2 ||w_k||^2), and the sum rate of the system is given by R_sum = Σ_{k=1}^{Nuser} log2(1 + SINR_k).
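The short sketch below evaluates the per-user MMSE combiner, the resulting SINRs, and the sum rate for a small downlink MU-MIMO setting. It is an illustrative example, not the book's algorithm: the dimensions, the random channels, the noise variance, and the simple per-user matched (dominant right-singular-vector) precoders are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, n_tx, n_rx = 2, 4, 2      # assumed dimensions
sigma2 = 0.1                       # noise variance, assumed

# Per-user channels H_k (N_RX x N_TX)
H = [(rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
     for _ in range(n_users)]

# Simple precoders: dominant right singular vector of each user's channel (unit norm)
P = np.column_stack([np.linalg.svd(Hk)[2].conj().T[:, 0] for Hk in H])

sum_rate = 0.0
for k, Hk in enumerate(H):
    # Covariance of the received signal at user k (unit-power symbols assumed)
    R = Hk @ P @ P.conj().T @ Hk.conj().T + sigma2 * np.eye(n_rx)
    w = np.linalg.solve(R, Hk @ P[:, k])          # MMSE combiner for user k's stream
    desired = abs(w.conj() @ Hk @ P[:, k]) ** 2
    interf = sum(abs(w.conj() @ Hk @ P[:, j]) ** 2 for j in range(n_users) if j != k)
    sinr = desired / (interf + sigma2 * np.linalg.norm(w) ** 2)
    sum_rate += np.log2(1 + sinr)

print(f"sum rate: {sum_rate:.2f} bps/Hz")
```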
In the uplink of a multi-user MIMO system, the received signal at the gNB can be written as y = Σ_{k=1}^{Nuser} H_k x_k + n, where x_k is the N_TX × 1 transmitted signal vector of the kth UE with N_TX transmit antennas, H_k denotes the flat-fading channel matrix from the kth user to the gNB, and n = (n_1, n_2, ..., n_NRX)^T ~ CN(0, I) is an independent and identically distributed additive white Gaussian noise vector at the gNB. We assume that the receiver has perfect and instantaneous knowledge of the channel matrices H_k. Note that the gNB is equipped with N_TX transmit and N_RX receive antennas. In the downlink, the received signal at the kth receiver can be written as y_k = H_k x + n_k, k = 1, 2, ..., Nuser, where H_k is the downlink channel and n_k is the complex-valued additive Gaussian noise at the kth receiver. We assume that each receiver also has perfect and instantaneous knowledge of its own channel matrix H_k. The transmitted signal x is a function of the multiple users' information data, that is, x = Σ_{k=1}^{Nuser} x_k, where x_k is the signal carrying the kth user's message with covariance matrix Σ_k = E{x_k x_k^H}. The power allocated to the kth user is given by P_k = tr{Σ_k}.
Figure 4.61 MU-MIMO BC model with linear precoding and scheduling at the transmitter, per-user receive combiners, and finite-rate feedback channels [15].
Figure 4.62 An example illustration of the capacity region, indicating the maximum sum capacity point and a near-far capacity region.
Under a sum power constraint at the gNB, the power allocation needs to maintain Σ_{k=1}^{Nuser} P_k ≤ P_T. Assuming unit variance for the noise, it can be shown that, for a given dirty paper encoding order, the capacity region for a given matrix channel realization can be written as [15] C_DL = ∪_{Σ_k: Σ_k tr(Σ_k) ≤ P_T} { (R_1, ..., R_Nuser) ∈ R_+^Nuser : R_k ≤ log2 [det(I + H_k(Σ_{j≥k} Σ_j)H_k^H) / det(I + H_k(Σ_{j>k} Σ_j)H_k^H)] }, where R_+^Nuser is the Nuser-dimensional set of positive real numbers. The preceding equation may be optimized over each possible user ordering. Although difficult to realize in practice, the computation of the capacity region can be simplified using the result that the downlink capacity region can be calculated as the union of the regions of the dual MAC over all uplink power allocation vectors meeting the sum power constraint. The fundamental effect of the use of multiple antennas at either the gNB or the user terminals in increasing the channel capacity is best realized by examining how the sum capacity, that is, the point obtained by maximizing Σ_{k=1}^{Nuser} R_k over the capacity region, scales with the number of active users (see Fig. 4.62).
An efficient UE pairing scheme is required at the gNB to choose the right pair of UEs for transmission in MU-MIMO systems. The pairing scheme is required to maintain minimal interference among the scheduled UEs in MU-MIMO transmission. A proper pairing scheme can be designed by maximizing the chordal distance between the feedback precoding matrices of the UEs. The chordal distance between two matrices is given in [15] as d_c(P_k, P_m) = (1/√2) ||P_k P_k^H − P_m P_m^H||_F, where ||·||_F denotes the Frobenius norm of the matrix. The chordal distance generalizes the distance between two points on the unit sphere through an isometric embedding from the complex Grassmann manifold Gr(N_RX, N_1) to the unit sphere. Assuming an infinite number of UEs served by the current gNB, the kth UE with reported precoding matrix P_k will be paired with the mth UE, where the mth UE reports precoding matrix P_m and the chordal distance between the two precoding matrices is maximized. With the maximized chordal distance criterion, P_m stays in the anti-polar position of P_k; hence, the spatial correlation between the two reported precoders is minimized, yielding minimized inter-user interference. Therefore, the UE pairing scheme for MU-MIMO transmission in practical systems is designed to find the best match between two UEs (e.g., the mth UE and the kth UE) based on the reported precoding matrices and the criterion P_m = arg max_{P ∈ P_UE} d_c(P, P_k), with P_UE representing the pool containing all reported precoding matrices at a certain gNB.
Note: The asymptotic performance of a coding scheme is dominated by the shortest distance between any pair of codewords. The relevant distance measure between two codewords x_1 and x_2 of an orthogonal code for a non-coherent MIMO system is the chordal distance defined as d_c^2(x_1, x_2) = (1/2) ||x_1 x_1^H − x_2 x_2^H||_F^2.
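A minimal sketch of this pairing criterion is shown below; the random 2-layer precoders, the pool size, and the brute-force search are illustrative assumptions only. It computes the chordal distance between reported precoding matrices and, for a given UE, picks the partner whose precoder is farthest away.

```python
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_layers, n_ues = 4, 2, 8          # assumed dimensions

def chordal_distance(P1, P2):
    """d_c = (1/sqrt(2)) * ||P1 P1^H - P2 P2^H||_F for orthonormal precoders."""
    return np.linalg.norm(P1 @ P1.conj().T - P2 @ P2.conj().T, "fro") / np.sqrt(2)

def random_precoder():
    # Orthonormalize a random complex matrix (QR) to emulate a reported codebook entry
    A = rng.standard_normal((n_tx, n_layers)) + 1j * rng.standard_normal((n_tx, n_layers))
    Q, _ = np.linalg.qr(A)
    return Q[:, :n_layers]

reported = [random_precoder() for _ in range(n_ues)]   # pool P_UE of reported precoders
k = 0                                                  # UE to be paired
distances = [chordal_distance(reported[k], Pm) for Pm in reported]
distances[k] = -1.0                                    # exclude pairing a UE with itself
m = int(np.argmax(distances))                          # P_m = argmax d_c(P, P_k)

print(f"UE {k} paired with UE {m}, chordal distance {distances[m]:.3f}")
```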
4.1.9.3 Analog, Digital, and Hybrid Beamforming
Large antenna arrays and beamforming play an important role in 5G implementations, since both base stations and devices can accommodate a larger number of antenna elements at mmWave frequencies. Aside from a higher directional gain, these antenna types offer complex beamforming capabilities. This allows increasing the capacity of cellular networks by improving the signal-to-interference ratio through direct targeting of user groups. The narrow transmit beams simultaneously lower the amount of interference in the radio propagation environment and make it possible to maintain sufficient signal power at the receiver terminal at larger distances in rural areas. An important prerequisite for any beamforming architecture is a phase-coherent signal, which means that there is a defined and stable phase relationship between all RF carriers. A fixed phase offset between the carriers can be used to steer the main antenna lobe to a desired direction. A major difference between NR and LTE is the support for beamformed control channels, which resulted in a different reference signal design for each control channel. The NR physical channels and signals, including those used for control and synchronization, have all been designed to support beamforming. The CSI for operation with a large number of antennas can be obtained via CSI reports from the devices based on transmission of CSI-RSs in the downlink, as well as via uplink measurements exploiting channel reciprocity.
4.1.9.3.1 Analog Beamforming
Analog beamforming typically relies on conditioning the amplitude and phase of the signals that feed the antenna array. The combination of these two factors is used to improve sidelobe suppression or to steer nulls. Phase and amplitude for each antenna element are combined by applying a complex-valued weighting factor to the signal that is fed to the corresponding antenna.
Figure 4.63 Analog beamforming transmitter architecture (a single digital baseband/RF chain driving per-element phase shifters and PAs; beam steering examples for 2-, 4-, 8-, and 16-antenna linear arrays with −30°, 0°, and +30° phase shifts).
Fig. 4.63 shows a basic implementation of an analog beamforming transmitter architecture. This architecture consists of only one RF chain and multiple phase shifters that feed an antenna array. Phased arrays have been used in practical systems (e.g., radar systems) for the past several decades. Beam steering was often carried out with a selective RF switch and fixed phase shifters. This concept is still used in modern communication systems using advanced hardware and improved precoding techniques. These enhancements enable separate control of the phase of each element. Unlike traditional passive architectures, the beam can be steered not only to discrete angles but to virtually any angle using active beamforming antennas. Analog beamforming is performed in the analog domain at RF frequencies or at an intermediate frequency. However, implementing multi-stream transmission with analog beamforming is a highly complex task. In order to calculate the phase shifts, a uniformly spaced linear array with element spacing d is assumed. Considering the receive scenario, the antenna array must be in the far field of the incoming signal, so that the arriving wave front is approximately planar. If the signal arrives at an angle θ relative to the antenna boresight, the wave must travel an additional distance d sin θ to arrive at each successive element. This translates to an element-specific delay, which can be converted to a frequency-dependent phase shift of the signal Δφ = 2πd sin(θ)/λ. The frequency dependency translates into an effect called beam unevenness (beam squint). The main lobe of an antenna array at a given frequency can be steered to a certain angle using phase offsets calculated with the latter equation (see Fig. 4.63). If the antenna elements are then fed with a signal of a different frequency, the main lobe will swerve by a certain angle. Since the phase relations were calculated with a certain carrier frequency in mind, the actual angle of the main lobe shifts according to the current frequency. Radar applications with large bandwidths in particular suffer inaccuracies due to this effect. The latter equation can be expressed in the time domain using time delays instead of phase offsets, τ = (d/c) sin θ. This means that the frequency dependency is eliminated if the setup is fitted with delay lines instead of phase shifters. The performance of the analog architecture can be further improved by additionally changing the magnitude of the signals feeding the antennas [56]. Analog signal processing typically implies that beamforming is carried out on a per-carrier basis. For the downlink transmission, this implies that it is not possible to frequency-multiplex beamformed transmissions to devices located in different directions relative to the base station. In other words, beamformed transmissions to different devices located in different directions must be separated in time.
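The phase-shift relation above is easy to verify numerically. The sketch below, in which the array size, element spacing, and carrier frequency are assumptions chosen for illustration, computes the per-element phase offsets for a desired steering angle and evaluates the resulting normalized array factor.

```python
import numpy as np

c = 3e8                          # speed of light (m/s)
fc = 28e9                        # carrier frequency, assumed (28 GHz)
lam = c / fc                     # wavelength
d = lam / 2                      # element spacing, assumed half-wavelength
n_el = 8                         # number of elements in the linear array, assumed
theta_steer = np.deg2rad(30.0)   # desired steering angle from boresight

n = np.arange(n_el)
# Per-element phase offsets: n * delta_phi with delta_phi = 2*pi*d*sin(theta)/lambda
phase_offsets = 2 * np.pi * d * np.sin(theta_steer) / lam * n
weights = np.exp(-1j * phase_offsets)            # analog phase-shifter settings

def array_factor(theta):
    steering = np.exp(1j * 2 * np.pi * d * np.sin(theta) / lam * n)
    return abs(weights @ steering) / n_el        # normalized array response

print(f"gain at 30 deg: {array_factor(np.deg2rad(30)):.2f} (should be ~1.0)")
print(f"gain at  0 deg: {array_factor(np.deg2rad(0)):.2f}")
```

Recomputing the array factor with the same phase offsets but a different carrier frequency shifts the peak away from 30°, which is the beam-squint effect described in the text.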
4.1.9.3.2 Digital Beamforming
While analog beamforming is generally restricted to one RF chain even when using large antenna arrays, digital beamforming in theory supports as many RF chains as there are antenna elements. If suitable precoding is performed in the digital baseband, this yields higher flexibility regarding transmission and reception. The additional degrees of freedom can be leveraged to perform advanced techniques such as multi-beam MIMO. These advantages result in the highest theoretical performance possible compared to other beamforming architectures. Fig. 4.64 illustrates the high-level digital beamforming transmitter architecture with multiple RF chains.
Figure 4.64 Digital beamforming transmitter architecture (N data streams, digital baseband precoding, and one RF chain per antenna).
Beam unevenness is a problem of analog beamforming architectures using phase shifters. This is a drawback considering that 5G plans to make use of large bandwidths in the mmWave bands. Digital control of the RF chain enabl
es optimization of the phases according to the frequency over a large band. Nonetheless, digital beamforming may not always be ideally suited for practical implemen- tations of 5G applications. The very high complexity and requirements regarding the hard- ware may significantly increase cost, energy consumption, and complicate integration in mobile devices. Digital beamforming is better suited for use in base stations, since perfor- mance outweighs mobility in this case. Digital beamforming can accommodate multi-stream transmission and serve multiple users simultaneously, which is a key driver of the technol- ogy [56]. Multiple antennas at the transmitter and receiver can be used to achieve array and diversity gain instead of capacity gain. In this case, the same symbol weighted by a complex-valued scale factor is sent from each transmit antenna, SO that the input covariance matrix has unit rank. This scheme is referred to as beamforming. It must be noted that there are two con- ceptually and practically different classes of beamforming: (1) direction-of-arrival beam- forming (i.e., adjustment of transmit or receive antenna directivity) and (2) eigen- beamforming (i.e., a mathematical approach to maximize signal power at the receive antenna based on certain criterion). In this section, we only consider eigen-beamforming schemes. A classic eigen-beamforming scheme usually performs linear, single-layer, complex- valued weighting on the transmit symbols such that the same signal is transmitted from each transmit antenna using appropriate weighting factors. In this scheme, the objective is to maximize the signal power at the receiver output. When the receiver has multiple antennas, the single-layer beamforming cannot simultaneously maximize the signal power at the receive antennas; hence, precoding is used for multi-layer beamforming in order to maximize the throughput of a multi-antenna system. Precoding is a generalized beam- forming scheme to support multi-layer transmission in a MIMO system. Using precoding, mult
iple streams are transmitted from the transmit antennas with independent and appropriate weighting per antenna such that the throughput is maximized at the receiver output. Let us begin our concise study of eigen-beamforming and MIMO precoding using a simplified model, where we have two transmit antennas at the base station and a UE with a single receive antenna. The goal is to find the complex-valued precoding weights such that the SNR at the receiver is maximized. The channel in this example is a vector h = (h_1, h_2), where h_1 and h_2 are the channel coefficients. It can be shown that the SNR-maximizing complex-valued weighting factors are the matched-filter (maximum ratio transmission) weights p_1 = h_1*/||h|| and p_2 = h_2*/||h||, as illustrated in Fig. 4.65. This example illustrates the concept of digital precoding and how, in theory, the weighting vectors/matrices are calculated to maximize the SNR at the receiver.
Figure 4.65 Concept of digital precoding.
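A tiny numerical check of this matched-filter (MRT) solution is given below; the channel values are arbitrary assumptions for the example. It confirms that the conjugate-matched weights achieve a received SNR gain of ||h||^2 and outperform randomly drawn unit-norm precoders.

```python
import numpy as np

rng = np.random.default_rng(5)
h = np.array([0.8 + 0.3j, -0.4 + 0.9j])      # 2-TX, 1-RX channel, assumed values

# MRT / matched-filter weights: p = conj(h) / ||h||  (unit total transmit power)
p_mrt = h.conj() / np.linalg.norm(h)
snr_mrt = abs(h @ p_mrt) ** 2                 # noise variance normalized to 1

# Compare against random unit-norm precoders
best_random = 0.0
for _ in range(1000):
    p = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    p /= np.linalg.norm(p)
    best_random = max(best_random, abs(h @ p) ** 2)

print(f"MRT SNR gain: {snr_mrt:.3f} (= ||h||^2 = {np.linalg.norm(h)**2:.3f})")
print(f"best of 1000 random precoders: {best_random:.3f}")
```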
In an SU-MIMO system, identity matrix precoding (for open-loop operation) and SVD precoding (for closed-loop operation) can be used to achieve the link-level MIMO channel capacity. In addition, random unitary precoding can achieve the open-loop MIMO channel capacity with no signaling overhead in the uplink. SVD precoding, on the other hand, has been shown to achieve the MIMO channel capacity when CSI is known at the transmitter. In a precoded SU-MIMO system with N_TX transmit antennas and N_RX receive antennas, the input-output relationship can be described as y = HWs + n, where s = (s_1, s_2, ..., s_M)^T is an M × 1 vector of normalized complex-valued modulated symbols, y = (y_1, y_2, ..., y_NRX)^T and n = (n_1, n_2, ..., n_NRX)^T are the N_RX × 1 vectors of received signal and noise, respectively, H is the N_RX × N_TX complex-valued channel matrix, and W is the N_TX × M linear precoding matrix. At the receiver, a hard-decoded symbol vector ŝ is obtained by decoding the received vector y using a vector decoder, assuming perfect knowledge of the channel and optimal selection of the precoding matrices. We assume that the entries of H are independent and identically distributed according to CN(0, 1) (complex-valued normal distribution) and that the entries of the noise vector n are independent and identically distributed according to CN(0, N_0). The input vector s is assumed to be normalized, thus E[ss^H] = I, where I is an identity matrix. Let us further assume that the precoding matrix W is semi-unitary, thus W^H W = I. The receiver selects a precoding matrix W_i, i = 1, 2, ..., N_codebook, from a finite set of quantized precoding matrices W = {W_1, W_2, ..., W_Ncodebook} and sends the index of the chosen precoding matrix back to the transmitter over a low-delay feedback channel. There are two important issues concerning the above precoding scheme: (1) the optimal selection criterion for choosing a precoding matrix from the set W, and (2) the design of the codebook W. The matrix W_i, i = 1, 2, ..., N_codebook, can be selected from W using any of the following optimization criteria [75]: (1) minimizing the trace of the mean squared error (MMSE-trace selection), (2) minimizing the determinant of the mean squared error (MMSE-determinant selection), (3) maximizing the minimum singular value of HW (singular value selection), (4) maximizing the instantaneous capacity (capacity selection), or (5) maximizing the minimum received symbol vector distance (minimum distance selection). The above selection criteria may be evaluated at the receiver using a full search over all matrices in W. Using distortion functions based on the selection criteria, it can be shown that the codebook W is designed using Grassmannian subspace packing [75]. If MMSE-trace, singular value, or minimum distance selection is used, the codebook is designed such that the minimum projection two-norm distance δ = min_{i≠j} ||W_i W_i^H − W_j W_j^H||_2 is maximized. If MMSE-determinant or capacity selection is used, the codebook is designed such that the minimum Fubini-Study distance between the codebook entries is maximized. The precoding matrices (MIMO codebooks) are designed based on a trade-off between performance and complexity. The following are some desirable properties of the codebooks:
1. Low-complexity codebooks can be designed by choosing the elements of each constituent matrix or vector from a small alphabet, for example, a four-element set {±1, ±j}, which eliminates the need for matrix or vector multiplication. In addition, the nested property of the codebooks can further reduce the complexity of CQI calculation when rank adaptation is performed.
2. The base station may perform rank overriding, which results in significant CQI mismatch if the codebook structure cannot adapt to it. A nested property with respect to rank overriding can be exploited to mitigate the mismatch effects.
3. Power amplifier balance is taken into consideration when designing codebooks with the constant modulus property, which may eliminate an unnecessary increase in PAPR.
4. Good performance for a wide range of propagation scenarios, for example, uncorrelated, correlated, and dual-polarized channels, is expected from the codebook design algorithms. A DFT-based codebook is optimal for a linear array with small antenna spacing, since its vectors match the structure of the transmit array response. In addition, with an optimal selection of the matrices and the entries of the codebook (rotated block diagonal structure), significant gains can be obtained in dual-polarized scenarios.
5. Low feedback and signaling overhead are desirable from operation and performance perspectives.
6. Low memory requirement is another design consideration for the MIMO codebooks.
Let us consider multi-user MIMO systems and briefly study how precoding is applied in those scenarios. In the downlink direction of a precoded MU-MIMO system (alternatively known as the broadcast channel in the literature) with N_TX transmit antennas at the base station and one receive antenna at the kth mobile station, the input-output relationship can be written as y_k = h_k^H x + n_k, k = 1, 2, ..., Nuser, where x = Σ_{i=1}^{Nuser} w_i s_i is the N_TX × 1 vector of weighted transmitted symbols s_i, y_k and n_k are the received signal and noise, respectively, h_k is the kth N_TX × 1 channel vector, the matrix H = (h_1, h_2, ..., h_Nuser) is the N_TX × Nuser complex-valued downlink channel matrix, and w_k is the kth N_TX × 1 normalized linear precoding vector.
Note: The Euclidean (Frobenius) norm of a square matrix A is defined as ||A||_F = sqrt(tr(A^H A)). The spectral norm of matrix A is the largest singular value of A, or the square root of the largest eigenvalue of the positive semi-definite matrix A^H A, that is, ||A||_2 = sqrt(λ_max(A^H A)), where A^H denotes the conjugate transpose of A [22].
The mathematical relationship between the input and output of a precoded MU-MIMO system in the uplink (alternatively known as the multiple access channel in the literature) with N_RX receive antennas at the base station and one transmit antenna at each user terminal can be written as y = Σ_{k=1}^{Nuser} s_k v_k h_k + n, where s_k v_k is the weighted complex-valued modulated symbol from the kth user, y = (y_1, y_2, ..., y_NRX)^T and n = (n_1, n_2, ..., n_NRX)^T are the N_RX × 1 vectors of received signal and noise, respectively, h_k is the kth N_RX × 1 channel vector, and the matrix H = (h_1, h_2, ..., h_Nuser) is the N_RX × Nuser complex-valued uplink channel matrix. As mentioned earlier, perfect knowledge of the CSI is necessary at the transmitter in order to achieve the capacity of a multi-user MIMO channel. However, in practical systems, the receiver only provides partial CSI through uplink feedback channels to the transmitter, that is, multi-user MIMO precoding with limited feedback. The received signal in the downlink of a MU-MIMO system with limited feedback precoding is mathematically expressed as y_k = h_k^H Σ_{i=1}^{Nuser} ŵ_i s_i + n_k, k = 1, 2, ..., Nuser. The transmit precoding vector for limited feedback precoding is modeled as ŵ_i = w_i + e_i, where e_i is the error vector generated as a result of the limited feedback and vector quantization; the expression for the received signal can then be rewritten as y_k = h_k^H w_k s_k + I_k + n_k, k = 1, 2, ..., Nuser, where I_k denotes the residual interference due to the limited-feedback precoding. To reduce the residual interference term, one should use more accurate CSI feedback, which results in the use of more uplink resources for the feedback. It is shown in the literature that the number of feedback bits per user, N_feedback, must be increased linearly with the SNR γ_dB (in decibels) at the rate N_feedback = (N_TX − 1) log2 γ ≈ γ_dB (N_TX − 1)/3 in order to achieve the full multiplexing gain of N_TX antennas [76]. In addition, this scaling of N_feedback guarantees that the throughput loss relative to zero-forcing (ZF) precoding with perfect CSI knowledge at the transmitter is upper bounded by N_TX bps/Hz, which corresponds to approximately a 3 dB power offset. The throughput of a feedback-based ZF system is bounded if the SNR approaches infinity while the number of feedback bits per user is fixed. Reducing the number of feedback bits according to N_feedback = α log2 γ for any α < N_TX − 1 results in a strictly inferior multiplexing gain of N_TX[α/(N_TX − 1)], where N_TX is the number of transmit antennas and γ is the SNR of the downlink channel. In order to calculate the amount of feedback required to maintain a certain throughput, the difference between the rates of ZF precoding with perfect feedback, R_PF-ZF(γ), and with limited feedback, R_LF-ZF(γ), is required to satisfy the constraint ΔR(γ) = R_PF-ZF(γ) − R_LF-ZF(γ) ≤ log2 b. In order to maintain a rate offset of less than log2 b (per user) between ZF with perfect CSI and with finite-rate feedback (i.e., ΔR(γ) ≤ log2 b), it is sufficient to scale the number of feedback bits per user according to N_feedback = (N_TX − 1) log2 γ − (N_TX − 1) log2(b − 1). The rate offset of log2 b (per user) translates into a power offset, which is a more useful metric from the design perspective. Since a multiplexing gain of N_TX is achieved with ZF, the ZF rate curve has a slope of N_TX bps/Hz per 3 dB at asymptotically high SNR. Therefore, a rate offset of log2 b bps/Hz per user corresponds to a power offset of 3 log2 b decibels. To feed back N_feedback bits through the uplink channel, the throughput of the uplink feedback channel should be larger than or equal to N_feedback, that is, W_FB log2(1 + γ_FB) ≥ N_feedback, where γ_FB denotes the SNR of the feedback channel. Thus the required feedback resource to satisfy the constraint ΔR(γ) ≤ log2 b can be shown to be W_FB ≥ [(N_TX − 1) log2 γ − (N_TX − 1) log2(b − 1)] / log2(1 + γ_FB); that is, the required feedback resource is a function of both downlink and uplink channel conditions [76].
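As a quick numerical illustration of this scaling rule, the snippet below evaluates how many feedback bits per user the formulas suggest as the downlink SNR grows; the antenna count, the SNR points, and the target rate offset are example values assumed for the sketch.

```python
import numpy as np

n_tx = 4                      # transmit antennas, assumed
b = 3.0                       # target per-user rate offset log2(b) ~ 1.6 bps/Hz, assumed

for snr_db in (5, 10, 15, 20):
    gamma = 10 ** (snr_db / 10)
    # Full multiplexing gain: N_feedback = (N_TX - 1) * log2(gamma) ~ snr_db*(N_TX - 1)/3
    n_fb_full = (n_tx - 1) * np.log2(gamma)
    # Bits needed to keep the per-user rate offset below log2(b)
    n_fb_offset = (n_tx - 1) * np.log2(gamma) - (n_tx - 1) * np.log2(b - 1)
    print(f"{snr_db:2d} dB: {n_fb_full:5.1f} bits (full gain), "
          f"{n_fb_offset:5.1f} bits (offset <= {np.log2(b):.1f} bps/Hz)")
```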
We defined digital precoding as adaptive or non-adaptive weighting of the spatial streams, prior to transmission from each antenna port (in a multi-antenna configuration), using a precoding matrix for the purpose of improving the reception or separation of the spatial streams at the receiver. Both feed-back and feed-forward precoder matrix selection schemes can be used in order to select the optimal weights. Feed-back precoding matrix selection techniques do not rely on channel reciprocity; rather, they use feedback channels, provided that the feedback latency is less than the channel coherence time. In feed-forward approaches, the necessary CSI can theoretically be obtained through direct feedback, where the CSI is explicitly signaled to the transmitter by the receiver, or estimated using the SRS. The direct channel feedback methods preclude the channel reciprocity requirement, whereas channel sounding methods rely on channel reciprocity. Therefore, explicit control signaling is required for the PMI-based (feedback) schemes. However, in reciprocity-based schemes, the sounding signals in the uplink and precoded pilots in the downlink are used to assist the transmitter and receiver in appropriately selecting the precoding matrix. The reciprocity-based schemes have the additional advantage of not being constrained to a finite set of codebooks. Beamforming relies on long-term statistics of the radio channel and, unlike reciprocity-based techniques, does not require short-term correlation between the uplink and downlink in order to function properly.
4.1.9.3.3 Hybrid Beamforming
Hybrid beamforming has been proposed as a possible solution that is able to combine the advantages of both analog and digital beamforming architectures. The idea of hybrid beamforming is based on the concept of phased array antennas commonly used in radar applications. Due to the reduced power consumption, it is also seen as a possible solution for mmWave mobile broadband communication.
If the phased array approach is combined with digital beamforming, it might also be feasible for non-static or quasi-static scenarios. Considering the inefficiency of mmWave amplifiers and the high insertion loss of RF phase shifters, it is more desirable to perform the phase shifting in the baseband. The power consumption associated with both cases is comparable as long as the number of antennas per RF chain remains relatively small.
Figure 4.66 Hybrid beamforming architecture (N data streams, digital baseband precoding, one RF chain per stream, and per-element phase shifters and PAs).
A significant cost reduction can be achieved by reducing the number of complete RF chains, which also lowers the overall power consumption. Since the number of data converters is significantly lower than the number of antennas, there are fewer degrees of freedom for digital baseband processing. Thus the number of simultaneously supported streams is reduced compared to digital beamforming. The resulting performance gap is expected to be relatively small in the mmWave bands, for which this scheme is more suitable, due to the specific channel characteristics. The high-level block diagram of a hybrid beamforming transmitter is shown in Fig. 4.66. The precoding is divided between the analog and digital domains. In theory, it is possible to assume that each amplifier is interconnected to each radiating element. In recent years, hybrid beamforming with low-resolution data conversion (digital-to-analog/analog-to-digital) has been studied, including the energy efficiency/spectral efficiency trade-off of fully connected hybrid and digital beamforming with low-resolution data converters. One of the challenges of large antenna arrays is the increasing cost and complexity of using many analog-to-digital and digital-to-analog converters and other RF components to drive individual elements or subarrays. Thus the feasibility study of low-resolution and, in
the extreme case, one-bit resolution data converters would be very important for the practical implementation of massive MIMO systems [16]. In summary, there are three types of beamforming architectures used for antenna arrays [56]:
- Analog beamforming: The traditional way to form beams is to use attenuators and phase shifters as part of the analog RF circuit, where a single data stream is divided into separate paths. The advantage of this method is that only one RF chain (PA, LNA, filters, switch/circulator) is required. The disadvantage is the loss from the cascaded phase shifters at high power.
- Digital beamforming: It assumes there is a separate RF chain for each antenna element. The beam is then formed by matrix-type operations in the baseband where the amplitude and phase weighting are applied. For frequencies lower than 6 GHz, this is the preferred method, since the RF chain components are comparatively inexpensive and MIMO and beamforming can be combined in a single array. For frequencies of 28 GHz and above, the PAs and ADCs/DACs are very lossy for standard CMOS components. Gallium arsenide and gallium nitride can be used at high frequencies to decrease losses at the expense of higher cost.
- Hybrid beamforming: It combines digital and analog beamforming in order to allow the flexibility of MIMO and beamforming while reducing the cost and losses of the beamforming unit. Each data stream has its own separate analog beamforming unit with a set of N_antennas antennas; if there are N_streams data streams, then there are N_streams × N_antennas antennas in total. The analog beamforming unit loss due to phase shifters can be mitigated by replacing the adaptive phase shifters with a selective beamformer such as a Butler matrix. Some architectures use the digital beamforming unit to steer the direction of the main beam while the analog beamforming unit steers the beam within the digital envelope.
4.1.9.4 Full-Dimension MIMO
Beamforming is a signal processing method that generates directional antenna beam patterns using multiple antennas at the transmitter. It is possible to steer the transmitted signal toward a desired direction and, at the same time, avoid receiving an unwanted signal from an undesired direction. Traditional beamforming schemes controlled the beam pattern only in the horizontal (azimuth) plane. Three-dimensional beamforming adapts the radiation beam pattern in both the elevation and azimuth planes to provide more degrees of freedom in supporting users. Higher average user throughput, less inter-cell and inter-sector interference, higher energy efficiency, improved coverage, and increased spectral efficiency are some of the advantages of 3D beamforming or full-dimension MIMO. In order to exploit the vertical dimension, the antenna tilt can be considered along the vertical axis. The antenna tilt angle is defined as the angle between the horizontal plane and the boresight direction of the antenna pattern.
Figure 4.67 Illustration of mechanical/electrical tilting, vertical, and 3D beamforming [23].
Mechanical alignment of the antenna was traditionally used to adjust
-dimension MIMO (FD-MIMO) is a combination of azimuth and elevation beamforming. Depending on the way that the antenna down tilt is changed, 3D beamforming can be classified into static and dynamic schemes. The static 3D beamforming refers to a system where the antenna tilt at the base station is set to a fixed value according to some statistical metrics, for example, the mean value of the vertical angles of users. This method cannot be adapted to the dynamic patterns of users' movements, that is, once the tilt angle is selected, it will remain unchanged. In contrast the dynamic 3D beamforming is a technique that steers the base station antenna tilt angle according to specific user locations. As mentioned earlier, the antennas at the base station are usually configured as a linear array of a limited number of antennas in the azi- muth plane. However, these geometries can shape the radiation pattern only in the horizon- tal plane; hence, to change the beam in the elevation plane for 3D beamforming, more general 2D or 3D array topologies are necessary. Those arrays are active antenna systems that are spaced in both azimuth and vertical planes with different configurations such as pla- nar, circular, spherical, or cylindrical structures. In addition, the array may include co- polarized or cross-polarized antenna elements [23]. In general, adding more antenna elements to the array provides more flexibility in beam steering designs and increases the number of radiation beams of the array. For vertical sec- torization in which the number of vertical sectors is usually small (e.g., two or three), only a small number of antennas are required in the vertical plane. However, in 3D beamforming New Radio Access Physical Layer Aspects (Part 2) 575 with per-user beam pattern adaptation (i.e., user tracking), a large number of antennas are needed. One of the challenges of the 3D beamforming is physical constraints and placement of a large number of antennas at the base station (see Fig. 4.68). This problem may be alle- viated i
The study of elevation beamforming and FD-MIMO began in 3GPP LTE Rel-12. In an FD-MIMO system, a base station with a two-dimensional active antenna array supports multi-user joint elevation and azimuth (3D) beamforming, which results in much higher cell capacity compared to conventional systems. In an FD-MIMO architecture using 2D AASs with Ncolumn × Nrow physical antennas, the precoding of a data stream is performed in two stages: (1) antenna-port virtualization, where a stream on an antenna port is precoded on NTXRU transceiver units (TXRUs); and (2) transceiver-unit virtualization, where a TXRU signal is precoded on Nantenna antenna elements. It is noted that in traditional transceiver architecture modeling, a fixed one-to-one mapping is assumed between antenna ports and transceiver units, and the TXRU virtualization effect is combined into a fixed antenna pattern that captures the effects of both the TXRU virtualization and the antenna element pattern. Antenna-port virtualization is an operation in the digital domain, and it refers to digital precoding that can be performed in a frequency-selective manner. An antenna port is typically defined in conjunction with a reference signal. For example, for precoded data transmission on an antenna port, a DM-RS is transmitted on the same bandwidth as the data, and both are precoded with the same digital precoder. For CSI estimation, on the other hand, CSI-RSs are transmitted on multiple antenna ports. For CSI-RS transmissions, the precoder characterizing the mapping between the CSI-RS ports and the TXRUs can be designed as an identity matrix to facilitate the device's estimation of the TXRU virtualization precoding matrix for the data precoding vectors. The TXRU virtualization is an analog operation; thus, it refers to time-domain analog precoding. The TXRU virtualization can be made time-adaptive. When the TXRU virtualization is semi-static (or the rate of change of the TXRU virtualization is slow), the TXRU virtualization weights of a serving cell can be chosen to provide good coverage to its serving mobiles and to reduce interference to other cells.
There will be more challenges if the TXRU virtualization is dynamic, in terms of hardware implementation and protocol design [46]. In 1D TXRU virtualization, NTXRU TXRUs are associated with the Ncolumn antennas comprising a column antenna array with the same polarization. In 2D antenna arrays with a dual-polarized configuration, that is, P = 2, and the addition of Nrow rows, the total number of TXRUs would be Q = NTXRU × Nrow × P. In 2D TXRU virtualization, the Q TXRUs can be associated with any of the Ncolumn × Nrow × P antenna elements. These two different TXRU architectures have different trade-offs in terms of hardware complexity, power efficiency, cost, and performance. For each method, subarray partition and full-connection architectures are considered. In subarray partition, the antenna elements are partitioned into multiple groups with the same number of elements. In 1D subarray partition, the Ncolumn antenna elements comprising a column are partitioned into groups of K elements. In 2D subarray partition, the total number of antenna elements, Ncolumn × Nrow × P, is partitioned into rectangular arrays of K1 × K2 elements.
Figure 4.68 Horizontal, vertical, and 3D beamforming FD-MIMO systems: concept of FD-MIMO systems; practical 2D array antenna configuration; vertical and horizontal beamforming patterns; array partitioning architecture with the conventional CSI-RS transmission; and array connected architecture with beamformed CSI-RS transmission [46].
On the other hand, in 1D full-connection, the output signal of each TXRU associated with a column antenna array with the same polarization is split into Ncolumn signals, and those signals are precoded by a group of Ncolumn phase shifters or variable gain amplifiers. Then, the NTXRU weighted signals are combined at each antenna element. In 2D full-connection, the output signal of each TXRU is split into Ncolumn × Nrow × P signals, and those signals are precoded by a group of Ncolumn × Nrow × P phase shifters or variable gain amplifiers. Then, the Q weighted signals are combined at each antenna element. An illustration of 1D subarray partition and full-connection as well as general FD-MIMO architectures is shown in Fig. 4.69 [47].
Figure 4.69 Illustration of transceiver architectures in FD-MIMO.
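The two 1D TXRU virtualization options can be contrasted with a small sketch that builds the element-to-TXRU weight matrices; the progressive-phase (tilt) weights, array size, and TXRU count below are assumed values used only to show the block-diagonal versus fully populated structure.

```python
import numpy as np

def subarray_partition_weights(n_col, n_txru, tilt_deg, d_lambda=0.5):
    """1D subarray partition: each TXRU drives a disjoint group of K = n_col / n_txru
    elements through a progressive-phase (tilt) weight. Returns the n_col x n_txru
    virtualization matrix, which is block-structured (one block per TXRU)."""
    assert n_col % n_txru == 0
    k = n_col // n_txru
    phases = np.exp(-1j * 2 * np.pi * d_lambda * np.arange(k)
                    * np.sin(np.radians(tilt_deg))) / np.sqrt(k)
    w = np.zeros((n_col, n_txru), dtype=complex)
    for q in range(n_txru):
        w[q * k:(q + 1) * k, q] = phases
    return w

def full_connection_weights(n_col, n_txru, tilt_degs, d_lambda=0.5):
    """1D full connection: every TXRU reaches all n_col elements through its own
    phase-shifter bank, and the weighted signals are summed per element.
    Returns a fully populated n_col x n_txru virtualization matrix."""
    m = np.arange(n_col)
    return np.stack([np.exp(-1j * 2 * np.pi * d_lambda * m
                            * np.sin(np.radians(t))) / np.sqrt(n_col)
                     for t in tilt_degs], axis=1)

W_sub = subarray_partition_weights(n_col=8, n_txru=2, tilt_deg=6.0)
W_full = full_connection_weights(n_col=8, n_txru=2, tilt_degs=[3.0, 9.0])
print(W_sub.shape, np.count_nonzero(W_sub))    # (8, 2) with 8 nonzero entries
print(W_full.shape, np.count_nonzero(W_full))  # (8, 2) with 16 nonzero entries
```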
Multi-antenna systems with a large number of base station antennas, often called massive MIMO, have received much attention in academia and industry as a means to improve the spectral efficiency and energy efficiency and to reduce the processing complexity of cellular systems. While massive MIMO is a promising technology, there are many practical and technical challenges on the path to its successful commercialization, including the design and implementation of low-cost and low-power base stations with large antenna arrays, capacity improvement of fronthaul links between remote radio heads and baseband units, measurement and reporting of high-dimensional/resolution CSI, etc. One of the main features of FD-MIMO systems is the potential to use a large number of antennas at the base station. Theoretically, as the number of base station antennas increases, the cross-correlation of two random channel realizations approaches zero; thus, inter-user interference in the downlink can be controlled via a simple linear precoder. However, such a benefit can be realized only when perfect CSI is available at the base station. While CSI acquisition in TDD systems is relatively simple due to channel reciprocity, that is not the case for FDD systems, because the time variation and frequency response of the channel in FDD systems are measured via the downlink reference signals and fed back to the base station after quantization. Even in TDD mode, one cannot always rely on channel reciprocity because the measurement at the transmitter does not capture the downlink interference from neighboring cells or co-scheduled UEs. As such, downlink reference signals are still required to capture the CQI for TDD systems. As a result, downlink reference signals and uplink CSI feedback are crucial for the operation of both duplex schemes. A common problem in closed-loop MIMO systems, and in particular FDD systems, is that the quality of CSI is limited by the available feedback resources. As CSI distortion increases, the MU-MIMO precoder's capability to control the inter-user interference is degraded, resulting in performance degradation of the FD-MIMO system. In general, the amount of CSI feedback, which determines the quality of CSI, needs to be scaled with the number of transmit antennas of the base station to control the quantization error, while limiting the overhead of CSI feedback to avoid an adverse impact on the system performance. An important problem related to CSI acquisition at the base station is the reference signal overhead. The UE performs channel estimation using the reference signals transmitted from the base station. Since the reference signals are typically distinguished through their orthogonal signatures, their overhead grows linearly with the number of transmit antennas.
As we mentioned earlier, FD-MIMO systems employ 2D planar arrays; thus, propagation in both vertical and horizontal directions, as well as the geometry of the transmitter array and the propagation effect of the 3D objects between the base station and the mobile station, should be taken into account in channel modeling. The 3D channel propagation behavior obtained through measurements shows the effect of a height- and distance-dependent LoS channel and the fact that the LoS probability between the base station and the UE increases with the UE's height and as the distance between them decreases. Further, it shows the effect of height-dependent path loss, where the UE experiences less path loss on a higher floor (e.g., 0.6 dB/m gain for a macrocell and 0.3 dB/m gain for a microcell). The height- and distance-dependent elevation spread of departure (ESD) effect is exhibited when the base station is located higher than the UE; the ESD decreases with the height of the UE and as the UE moves away from the base station [46,47]. FD-MIMO systems make use of beamformed reference signals for CSI acquisition. Beamformed reference signal transmission is a channel training technique that uses multiple precoding weights in the spatial domain. In this scheme, the UE selects the best weight among those transmitted and then feeds back its index. This scheme provides many benefits compared to the case with non-precoded reference signals, especially when the number of transmit antennas is large. It can be shown that this scheme has less uplink feedback overhead relative to the case with perfect CSI, where the number of feedback bits used for channel vector quantization is linearly proportional to the number of transmit antennas, whereas the amount of feedback for the beamformed reference signals scales logarithmically with the number of reference signals, because the UE only feeds back the index of the best beamformed reference signal.
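To make the scaling argument concrete, the following back-of-the-envelope sketch compares the two feedback types; the per-coefficient quantization resolution and the size of the beam set are assumed values for illustration, not 3GPP-specified feedback formats.

```python
import math

n_tx_list = [8, 16, 32, 64, 128]
bits_per_coeff = 4    # assumed quantization resolution per channel coefficient
n_beams = 32          # assumed number of beamformed CSI-RS resources

for n_tx in n_tx_list:
    bits_csi = n_tx * bits_per_coeff            # explicit channel-vector quantization: grows linearly with N_TX
    bits_beam = math.ceil(math.log2(n_beams))   # beam-index report: independent of N_TX
    print(f"N_TX={n_tx:4d}  channel quantization ~{bits_csi:4d} bits   beam index {bits_beam} bits")
```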
It can be further shown that the downlink pilot overhead is lower than in the case where non-precoded reference signals are used. The non-beamformed reference signal overhead increases with the number of transmit antennas, resulting in a substantial loss of sum capacity in FD-MIMO, whereas the beamformed reference signal overhead is proportional to the number of reference signals and independent of the number of transmit antennas; therefore, the rate loss of the beamformed reference signals is marginal even when the number of transmit antennas increases [46,47]. As we mentioned earlier, an AAS transceiver contains an integrated PA and LNA so that the gNB can control the gain and phase of individual antenna elements. A radio signal distribution/combining network between the TXRUs and antenna elements was introduced (see Fig. 4.69), whose role is to deliver the transmit signal from the PA to the antenna array elements and the received signal from the antenna array to the LNA. Depending on the CSI-RS transmission and feedback mechanism, two architecture options, array partitioning and array connected, may be used. The former architecture is more suitable for the conventional codebook scheme, and the latter is for the beamforming scheme. In the array partitioning architecture, antenna elements are divided into multiple groups, and each TXRU is connected to one of them, whereas in the array connected architecture, the radio distribution network is designed such that the RF signals of multiple TXRUs are delivered to a single antenna element. To combine RF signals from multiple TXRUs, additional RF combining circuitry is needed. In the array partitioning architecture, the total of L antenna elements is partitioned into several groups, one per TXRU, and an orthogonal CSI-RS is assigned to each group. Each TXRU transmits its own CSI-RS so that the UE can measure the channel h from the CSI-RS observation.
In the array connected architecture, each antenna element is connected to multiple (fewer than L) TXRUs, and an orthogonal CSI-RS is assigned to each TXRU. Denoting h ∈ C^{N×1} as the channel vector and v ∈ C^{N×1} as the precoding vector for each beamformed CSI-RS, the beamformed CSI-RS observation can be expressed as y = h^H vx + n, and the UE measures the precoded channel h^H v. Due to the narrow and directional CSI-RS beam transmission with a linear array, the SNR of the precoded channel is maximized at the target direction, that is, SNR(φ) = |h^H v(φ)|²/σ², where φ is the beam direction and σ² is the noise variance. In the non-beamformed scenario, the UE selects a precoder index that maximizes a certain performance criterion, sends it to the gNB, and thereby adapts to the channel variation. In the beamformed scenario, the gNB transmits multiple beamformed CSI-RSs using the connected array architecture, and the UE selects the preferred beam and then feeds back its index. When the gNB receives the beam index, the weight corresponding to the selected beam is used for data transmission to the UE [48].
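The beam-selection procedure described above can be sketched as follows, assuming a simple grid of progressive-phase beams and a single-path channel; the array size, codebook size, and noise variance are illustrative assumptions rather than values from [48].

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32           # antenna elements behind the beamformed CSI-RS ports (assumed)
N_BEAMS = 16     # number of beamformed CSI-RS resources (assumed)
NOISE_VAR = 0.1  # noise variance sigma^2 (assumed)

# Codebook of precoders v, one per beamformed CSI-RS, steering toward a fixed angular grid
beam_dirs = np.linspace(-0.9, 0.9, N_BEAMS)                 # sin(theta) grid of beam directions
codebook = np.exp(-1j * np.pi * np.outer(np.arange(N), beam_dirs)) / np.sqrt(N)

# Illustrative single-path channel toward a random direction (not a 3GPP channel model)
sin_theta = rng.uniform(-0.9, 0.9)
h = np.exp(-1j * np.pi * np.arange(N) * sin_theta)

# UE side: measure the precoded channel h^H v on each beamformed CSI-RS and compute its SNR
snr_per_beam = np.abs(h.conj() @ codebook) ** 2 / NOISE_VAR

# UE reports only the index of the strongest beam (log2(N_BEAMS) = 4 bits here)
best = int(np.argmax(snr_per_beam))
print(f"reported beam index {best}, SNR {10 * np.log10(snr_per_beam[best]):.1f} dB")
```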
Let us consider a cellular system consisting of Ncell cells, each with one base station and NUE terminals, as shown in Fig. 4.70. Each gNB is equipped with a 2D antenna array of NV × NH vertical and horizontal antennas, and each UE has a single antenna. We assume that all gNBs and UEs are synchronized and operate in TDD mode with universal frequency reuse. In the downlink, the nth base station applies an NVNH × NUE precoder Fn, n = 1, ..., Ncell, to transmit a symbol to each user, with the transmit power per user constrained to P/NUE. Uplink and downlink channels are assumed to be reciprocal. If hn,c,k denotes the NVNH × 1 uplink channel from user k in cell c to the nth base station, then the received signal by this user in the downlink can be written in terms of these precoders and channel vectors; in the following, rank(A) denotes the rank of a matrix A [48].
Figure 4.70 Illustration of a full-dimension MIMO cellular system.
The SINR at the kth user receiver in cell c is the received signal power of the kth UE (scaled by P/NUE) divided by the sum of the receiver noise, the interference from the signals transmitted to all other UEs in cell c (intra-cell interference), and the interference from the signals transmitted by all neighboring cells (inter-cell interference). The objective is to design the precoding matrices Fn, n = 1, 2, ..., Ncell, such that they minimize inter-cell interference with minimal requirements on the channel knowledge and can be implemented using low-complexity hybrid analog/digital architectures, that is, with a small number of RF chains.
17 In the theory of stochastic processes, the Karhunen-Loève theorem is a representation of a stochastic process as an infinite linear combination of orthogonal functions. The transformation is also known as the Hotelling transform and the eigenvector transform, and it is closely related to principal component analysis (PCA). In contrast to a Fourier series, where the coefficients are fixed numbers and the basis functions are sinusoidal, the coefficients in the Karhunen-Loève theorem are random variables and the basis functions depend on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance matrix of the process. Therefore, the Karhunen-Loève transform adapts to the process in order to produce the optimal basis for its expansion. In the case of a centered stochastic process {X(t) | t ∈ [a,b]}, that is, E[X(t)] = 0 for all t ∈ [a,b], satisfying a continuity condition, it can be shown that X(t) can be expanded as X(t) = Σ_{k=1}^{∞} Zk ek(t), where the Zk's are pairwise uncorrelated random variables and the functions ek(t) are continuous real-valued functions on [a,b] that are pairwise orthogonal in L2[a,b]. It is therefore sometimes said that the expansion is bi-orthogonal, since the random coefficients Zk are orthogonal in the probability space while the deterministic functions ek(t) are orthogonal in the time domain. The general case of a process X(t) that is not centered can be converted into a centered process by considering X(t) − E[X(t)], which is a centered process.
If the process is Gaussian, then the random variables Zk are Gaussian and stochastically independent. This result generalizes the Karhunen-Loève transform. An important example of a centered real stochastic process on [0,1] is the Wiener process, where the Karhunen-Loève theorem can be used to provide a canonical orthogonal representation for it. In this case, the expansion consists of sinusoidal functions [22].
The main idea of multi-layer precoding is to design the precoder matrix as a product of three precoding matrices (layers), where each layer is designed to achieve only one precoding objective, for example, maximizing the desired signal power, minimizing inter-cell interference, or minimizing multi-user interference [48]. In other words, the precoder of each cell is factored into an inter-cell interference management layer, a desired-signal beamforming layer, and a multi-user interference management layer.
4.1.9.5 Large-Scale (Massive) MIMO Systems
Massive MIMO is the generalization of a multi-user MIMO system that serves multiple users through spatial multiplexing over a channel with favorable propagation conditions, using a time-division duplex scheme and relying on channel reciprocity and uplink reference signals to obtain the CSI of each user. The base station is equipped with Nantennas antennas to communicate with Nuser (typically modeled as single-antenna) UEs on each time/frequency resource, where Nuser << Nantennas. Each base station in the network operates individually and processes its signals using linear transmit precoding and linear receive combining [16,52]. By coherent processing of the signals over the array, transmit precoding can be used in the downlink to focus each signal at its target user, and receive combining can be used in the uplink to distinguish between signals received from different user terminals; thus, the larger the number of antennas, the finer the spatial precision.
A generic massive MIMO system operates in TDD mode, where the uplink and downlink transmissions take place on the same frequency resource but are separated in time. The physical propagation channels are reciprocal, meaning that the channel responses are theoretically the same in both directions, which can be utilized in TDD operation. In practice, the transceiver hardware is not reciprocal; thus, transceiver calibration is required to exploit the channel reciprocity. Since uplink-downlink hardware mismatches change only slowly and slightly over time, they can be mitigated by simple calibration methods, even without extra reference transceivers, by relying on mutual coupling between antennas in the array. There are several reasons for the suitability of the TDD mode for massive MIMO, which include the following [16,52]: The base station needs to know the CSI to process the antennas coherently. Favorable propagation means that the channel matrix between the base station antenna array and the users is well-conditioned. In a massive MIMO system, under some conditions, the favorable propagation property holds due to the law of large numbers. In other words, the propagation is said to be favorable when the users' channel vectors are mutually orthogonal in some practical sense. The uplink channel estimation overhead is proportional to the number of terminals and independent of the number of antennas, making the scheme scalable with respect to the number of antennas. Furthermore, basic estimation theory indicates that the estimation quality (per antenna) cannot be reduced by adding more antennas at the base station. In fact, the estimation quality improves with the number of antennas if there is a known correlation structure between the channel responses over the array. The data transmission in massive MIMO is based on linear processing at the gNB. In the uplink, the gNB has NRX observations of the multiple access channel from the Nuser terminals.
The gNB applies maximal ratio combining (MRC) to separate the signal transmitted by each terminal from the interfering signals, using the channel estimate of a terminal to maximize the signal power of that terminal by coherently adding the signal components. This results in a signal amplification proportional to NRX, which is known as the array gain. Alternatively, ZF combining can be used, which suppresses inter-user interference at the cost of reducing the array gain to NRX − Nuser + 1, or MMSE combining can be utilized, which balances between amplifying signals and suppressing interference. Receive combining creates one effective scalar channel per terminal where the intended signal is amplified and/or the interference is suppressed. The performance of the receive combining methods improves by adding more gNB antennas, since there are more channel observations to utilize. The remaining interference is typically treated as additive noise; thus, conventional single-user detection algorithms can be applied. Another benefit of the combining is that small-scale fading averages out over the array, in the sense that its variance decreases with NRX. This is known as channel hardening and is a consequence of the law of large numbers. Since the uplink and downlink channels are ideally reciprocal in TDD systems, there is a strong connection between receive combining in the uplink and transmit precoding in the downlink. This is known as uplink-downlink duality. Linear precoding based on MRC, ZF, or MMSE principles can be applied to focus each signal on its target user and possibly to minimize interference toward other users [16,52].
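The channel-hardening claim above can be illustrated with a small Monte Carlo sketch, assuming i.i.d. Rayleigh fading; the antenna counts and sample size are arbitrary choices, not values from [16,52].

```python
import numpy as np

rng = np.random.default_rng(1)

# Channel hardening: as the number of gNB antennas N_RX grows, the squared channel norm,
# normalized by N_RX, concentrates around its mean (variance shrinks roughly as 1/N_RX).
for n_rx in [4, 16, 64, 256]:
    h = (rng.standard_normal((10_000, n_rx)) + 1j * rng.standard_normal((10_000, n_rx))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1) / n_rx   # per-realization normalized channel gain
    print(f"N_RX={n_rx:4d}  mean={gain.mean():.3f}  variance={gain.var():.4f}")
```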
It can be shown that the achievable spectral efficiency per cell of massive MIMO systems, under ideal conditions and independent and identically distributed Rayleigh fading, can be expressed in a closed form (in bps/Hz/cell) [16,52] in which the factor (1 − Nuser/T) is the loss due to pilot transmission, γ is the downlink/uplink SNR, and ECSI is the quality of the estimated CSI, proportional to the mean-squared power of the MMSE channel estimate, where ECSI = 1 represents perfect CSI. Note that the numerator of the logarithm argument in this expression increases proportionally with NTX due to the array gain and that the denominator represents the interference plus noise. While the generic theory of massive MIMO systems assumed single-antenna terminals, the technology can support terminals with N'RX antennas. In this case, Nuser denotes the number of simultaneous data streams and the preceding expression describes the spectral efficiency per stream. These streams can be divided over anything from Nuser/N'RX to Nuser terminals [16,52]. We discussed the capacity of MIMO systems earlier in this chapter, which can be written as C = Σ_{i=1}^{min(NTX,NRX)} log2(1 + γσi²/NTX), where it is assumed that the transmitter has full knowledge of the CSI, that the channel matrix H can be decomposed using the SVD method, and that the σi²'s are the eigenvalues of HH^H. In the preceding equation, if the SNR γ is extremely small, the capacity asymptotically approaches C_{γ→0} ≈ γNRX/ln 2, which is independent of NTX; thus, even under the most favorable propagation conditions, the multiplexing gains are lost, and from the perspective of achievable rate, multiple transmit antennas are of no value. Next, let the number of transmit antennas grow large while keeping the number of receive antennas constant. We further assume that the row vectors of the channel matrix are asymptotically orthogonal, in which case C_{NTX>>NRX} ≈ NRX log2(1 + γ). Then, let the number of receive antennas increase while keeping the number of transmit antennas constant. We also assume that the column vectors of the channel matrix are asymptotically orthogonal, in which case C_{NRX>>NTX} ≈ NTX log2(1 + γNRX/NTX). Therefore, an excess number of transmit or receive antennas, combined with asymptotic orthogonality of the propagation vectors, constitutes a highly desirable scenario. Additional receive antennas continue to improve the effective SNR and could in theory compensate for a low SNR and restore multiplexing gains that would otherwise be lost.
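As a quick numerical check of the large-NTX asymptote stated above (a self-contained sketch, not taken from [61]; the SNR value, i.i.d. Rayleigh channel, and number of averaged realizations are assumptions), the following snippet evaluates C = Σ log2(1 + γσi²/NTX) and compares it with NRX log2(1 + γ):

```python
import numpy as np

rng = np.random.default_rng(2)

def capacity(h, snr):
    """Capacity of one channel realization: sum over the eigenmodes of H H^H with
    equal power allocation across the N_TX transmit antennas."""
    n_rx, n_tx = h.shape
    eig = np.linalg.eigvalsh(h @ h.conj().T)
    return np.sum(np.log2(1 + snr * eig / n_tx))

snr = 10.0   # linear SNR (assumed)
n_rx = 4
for n_tx in [4, 16, 64, 256]:
    c = np.mean([capacity((rng.standard_normal((n_rx, n_tx)) +
                           1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2), snr)
                 for _ in range(200)])
    print(f"N_TX={n_tx:4d}  C={c:6.2f}  asymptote N_RX*log2(1+snr)={n_rx * np.log2(1 + snr):.2f}")
```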
Furthermore, orthogonality of the propagation vectors implies that independent and identically distributed complex Gaussian inputs are optimal, so that the achievable rates are in fact the true channel capacities [61]. The studies on massive MIMO have been mainly focused on frequencies below 6 GHz, where the transceiver hardware technology is very mature. The same concept can be applied in mmWave bands, where many antennas might be required since the effective aperture of the antenna is much smaller. However, the hardware implementation will be more challenging. The support of mobility will be more difficult because the coherence time will be an order of magnitude shorter due to the higher Doppler spread, which reduces the spatial multiplexing capability. The channel impulse response between a user terminal and a base station can be represented by an Nantennas-dimensional vector. Since the Nuser channel vectors are mutually non-orthogonal in general, advanced interference cancellation receivers are needed to suppress interference and achieve the sum capacity of the multi-user channel. As we mentioned earlier, favorable propagation is an environment where the Nuser users' channel vectors are mutually orthogonal (i.e., their inner products are zero). Favorable propagation channels are ideal for multi-user transmission since the interference is removed by simple linear processing (i.e., MRC and ZF) that utilizes the channel orthogonality. The question is whether there are any favorable propagation channels in practice. An approximate form of favorable propagation is achieved in non-LoS environments with rich scattering, where each channel vector has independent stochastic entries with zero mean and identical distribution. Under these conditions, the inner products (normalized by Nantennas) approach zero as the number of antennas increases, meaning that the channel vectors tend to be orthogonal as Nantennas increases. The sufficient condition above is satisfied for Rayleigh fading channels, which are considered in the studies on massive MIMO, but approximate favorable propagation can also be obtained in other conditions [16,52].
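A minimal Monte Carlo sketch of this favorable-propagation behavior, assuming i.i.d. Rayleigh channel vectors (the antenna counts and number of realizations are arbitrary choices, not values from [16,52]):

```python
import numpy as np

rng = np.random.default_rng(3)

# Favorable propagation: for i.i.d. Rayleigh channel vectors, the inner product between two
# users' channels, normalized by the number of antennas, vanishes as the array grows.
for n_ant in [8, 32, 128, 512]:
    h1 = (rng.standard_normal((1000, n_ant)) + 1j * rng.standard_normal((1000, n_ant))) / np.sqrt(2)
    h2 = (rng.standard_normal((1000, n_ant)) + 1j * rng.standard_normal((1000, n_ant))) / np.sqrt(2)
    cross = np.abs(np.sum(h1.conj() * h2, axis=1)) / n_ant   # |h1^H h2| / N_antennas per realization
    print(f"N_antennas={n_ant:4d}  mean |h1^H h2|/N = {cross.mean():.3f}")
```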
The conventional open-loop beamforming provides meaningful array gains for small arrays in LoS propagation environments; however, this scheme is not scalable and is not able to handle isotropic fading (isotropic fading encompasses a broad range of fading channels with the common property that the transmitter is unable to track the directions of the users' time-varying channel vectors; thus, from the transmitter standpoint, all directions are statistically equivalent). In practice, the channel of a particular user terminal might not be isotropically distributed; rather, it might have distinct statistical spatial properties. The codebook in open-loop beamforming cannot be tailored to a specific terminal; rather, it needs to explore all channel directions that are possible for the array. For large arrays with arbitrary propagation properties, the channels must be measured by reference signals, as is done in massive MIMO [16,52]. The studies on massive MIMO were mainly focused on the asymptotic regime where the number of service antennas Nantennas → ∞. Recent studies have derived closed-form achievable spectral efficiency expressions that are valid for any number of antennas and user terminals, any SNR, and any choice of reference signals. Those expressions do not rely on idealized assumptions such as perfect CSI, but rather on worst-case assumptions regarding the channel acquisition and signal processing. Although the total spectral efficiency per cell is greatly improved with massive MIMO, the anticipated performance per user lies in the conventional range of 1-4 bps/Hz. This is part of the range where conventional channel codes perform close to the Shannon limits. There are no strict requirements on the relation between Nantennas and Nuser in massive MIMO systems. A simple definition of massive MIMO would be a system with many active antenna elements that can serve a large number of user terminals.
One should avoid specifying a certain ratio Nantennas/Nuser, since it depends on a variety of conditions such as the system performance metric, the propagation environment, and the coherence block length. The massive MIMO gains do not require high-precision hardware; in fact, lower hardware precision can be tolerated compared to other systems, since additive distortions are suppressed in the processing. Another reason for the robustness is that massive MIMO can achieve high spectral efficiencies by transmitting low-order modulations to a multitude of terminals, while contemporary systems require high-precision hardware to support transmission of high-order modulations to a few terminals [16,52]. In an OFDM system, resource allocation means that the time-frequency resources are divided between the terminals to satisfy user-specific performance constraints, to find the best subcarriers for each terminal, and to overcome the small-scale fading effects by power control. Frequency-selective resource allocation can provide significant improvements when there are large variations in channel quality over the subcarriers, but it is also demanding in terms of channel estimation and computational overhead, since the decisions depend on the small-scale fading, which varies in time. If the same resource allocation concepts were applied to massive MIMO systems, with tens of terminals at each of the thousands of subcarriers, the system complexity would be prohibitive. However, the channel hardening effect in massive MIMO means that the channel variations are negligible over the frequency domain and mainly depend on the large-scale fading in the time domain, which typically varies much more slowly than the small-scale fading, making the conventional resource allocation concepts unnecessary for massive MIMO. The available bandwidth can be simultaneously allocated to each active terminal, and the power control decisions are made jointly for all subcarriers based only on the large-scale fading characteristics [16,52].
Thus, the resource allocation can be greatly simplified in massive MIMO systems.
4.1.9.6 NR Multi-antenna Transmission Schemes
Multi-antenna transmission and beamforming of control and traffic channels are distinct features of the new radio relative to its predecessors. In above-6 GHz frequency bands, the large number of antenna elements is primarily used for beamforming to achieve enhanced coverage, while at lower frequency bands, they enable FD-MIMO and interference avoidance by spatial filtering. The NR physical channels and signals, including those used for control and synchronization, have all been designed for beamforming. Unlike LTE, whose downlink control channels used transmit diversity to ensure a sufficient control channel link budget, the NR control channels rely on a single antenna port and beamforming to achieve the coverage requirements. The CSI for operation of massive MIMO schemes can be obtained by feedback of CSI reports based on transmission of CSI reference signals in the downlink, either per antenna element or per beam, as well as by using uplink measurements exploiting channel reciprocity. In order to simplify the implementation, the new radio supports analog beamforming in addition to digital precoding and beamforming. The support of analog beamforming, where the beam is shaped after digital-to-analog conversion, is necessary at high frequencies. Analog beamforming requires the receive or transmit beams to be formed in one direction at a given time and further requires beam sweeping, in which the same signal is repeated over multiple OFDM symbols in different transmit beams. This is to ensure that control/traffic signals can be transmitted with high gain to sufficiently cover the service area of the base station. Beam management procedures and signaling are further specified in NR, including indication to the device to assist selection of the receive b