3
Radio Access Technology

Sami Hakola1, Toni Levanen2, Juho Pirskanen3, Karri Ranta‐aho3, Samuli Turtinen1, Keeth Jayasinghe4, and Fred Vook5

1Nokia, Oulu, Finland

2Tampere University, Tampere, Finland

3Wirepas, Tampere, Finland

4Nokia, Espoo, Finland

5Nokia, Naperville, USA

3.1 Evolution Toward 5G

3.1.1 Introduction

Cellular technologies have developed tremendously in the past decades. The Global System for Mobile Communications (GSM), the best known 2G technology, was highly successful for voice service, completely changing our way of communicating over the telephone by being available everywhere and at any time. GSM also laid the foundation for cellular data services with the introduction of the General Packet Radio Service (GPRS). 3G, i.e. the Universal Mobile Telecommunications System (UMTS) with High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA) features, started the real data boom with a variety of data applications in devices. At the same time, companies introduced new types of mobile phones, more capable for data communication than earlier generations of smartphones and, like the iPhone, without a physical keypad. The data boom was further emphasized by the introduction of Long-Term Evolution (LTE) and, formally, 4G with Long-Term Evolution Advanced (LTE-A) features. The development in cellular technologies, as well as in wireless local area networks (WLANs, i.e. IEEE 802.11), changed our thinking on data connectivity, on its availability and mobility, and on using mobile data services outside of the home country.

The first 3G UMTS Release '99 supported theoretical data rates of 2 Mbps in downlink (DL) and 768 kbps in uplink (UL); practical downlink data rates, however, were around 384 kbps, with uplink data rates often limited to between 64 and 128 kbps (see [1,2]). Downlink data rates increased with HSDPA to 7.2 Mbps and up to 14.4 Mbps in 3GPP Release 5, and uplink data rates with HSUPA from 1.4 Mbps to 5.7 Mbps in Release 6. Downlink data rates of High Speed Packet Access (HSPA), the combination of HSDPA and HSUPA, were further increased by dual carrier operation, which doubled the utilized bandwidth to 10 MHz and the corresponding data rates. The introduction of 64QAM modulation added another 50% to the peak data rates.

Instead of the 5 MHz channel bandwidth utilized by HSPA, LTE introduced a significant step upward in the maximum channel bandwidth by supporting 20 MHz channels, which was set as a minimum User Equipment (UE) capability already at the introduction of LTE in 3GPP Release 8. The change of the waveform and access technology from Direct Spread Code Division Multiple Access (DS-CDMA) to Cyclic Prefix Orthogonal Frequency-Division Multiple Access (CP-OFDMA) in downlink and Discrete Fourier Transform-spread-Orthogonal Frequency-Division Multiplexing (DFT-s-OFDM) in uplink allowed better utilization of the Multiple Input Multiple Output (MIMO) technique in downlink. The use of MIMO and increased bandwidth resulted in DL data rates from 100 to 150 Mbps in practical handheld Release 8 devices; uplink data rates ranged from 25 to 50 Mbps. These data rates were further increased in 3GPP Release 10, mainly by the Carrier Aggregation (CA) feature adding more bandwidth to the radio link. Release 10 is also known as LTE-A or 4G, as LTE Advanced was the official submission from 3GPP as a Radio Interface Technology (RIT) for 4G to the International Telecommunication Union Radiocommunication sector (ITU-R). After Release 10, subsequent LTE releases introduced new UE capabilities by extending the number of carriers that can be simultaneously aggregated to the UE. LTE data rates have thus been further increased, as depicted in Figure 3.1, which presents uplink and downlink data rates from 3GPP Release '99 to Release 12.

Graph of data rate evolution from 3GPP UMTS Release ′99 to 3GPP LTE Release 12, displaying 4 ascending lines for typical DL and UL data rates (Mbps) and high end DL and UL data rates (Mbps).

Figure 3.1 Data rate evolution from 3GPP UMTS Release '99 to 3GPP LTE Release 12.

In addition to carrier aggregation, later releases of LTE introduced several other features such as Dual Connectivity, Full-Dimension MIMO, Coordinated Multipoint (CoMP) transmission, Machine Type Communication (MTC), and Narrowband Internet of Things (NB-IoT). Many of these features are conceptually used as part of 5G technology.

While LTE originally focused on mobile broadband (MBB) use cases, MTC and NB-IoT brought new use cases into the picture by introducing optimized support for low power IoT devices connecting to cellular networks. In 5G, in addition to enhanced Mobile Broadband (eMBB) and massive Machine Type Communication (mMTC), the new Ultra Reliable and Low Latency Communication (URLLC) use case has been considered.

In eMBB, it is evident that low communication latency and high throughput are essential for a high quality of experience for the end user. Nevertheless, eMBB services such as fast file transfer, video streaming in high-definition quality, and different mobile applications can tolerate long delays and errors in data transfers, which are then recovered by re-transmissions. The URLLC service, in contrast, is characterized by requiring both very high communication reliability and a latency so low that the possibility of performing re-transmissions, whether in the radio protocols, on the TCP/IP level, or on the application level, is very limited or non-existent.

The utilization of cellular technologies for self-driving cars, robots, and factory automation, to name a few new applications, has raised significant interest in URLLC. Many URLLC applications can be expected in industrial environments where externally controlled machines, such as cranes, trucks, or forklifts, operate in restricted areas. A break in communication could then slow down the operational speed of the machine, cause a full stop, or even trigger an emergency stop that damages the machine. Depending on the issues caused by the communication loss, such cases should be avoided, especially when operation is required 24 hours a day, 7 days a week. The radio solutions designed for URLLC are discussed together with overall system solutions in Chapter 8.

3.1.2 Pre‐Standard Solutions

Before the 3GPP standardization process started to create a globally harmonized technology standard for 5G, several companies published proposals for a future radio interface design, such as presented in [3,4]. Additionally, different industry players demonstrated 5G technology solutions and benefits of the features with proprietary Proof‐of‐Concept (PoC) implementations and test trials (see [5,6]).

Two major telecom operators, Korea Telecom (KT) in South Korea and Verizon Wireless in the US, invited several network vendors and UE chipset vendors to define pre-standard 5G solutions. In the case of KT, the work was done in the KT PyeongChang 5G Special Interest Group (5G-SIG) to realize the world's first 5G trial service at the PyeongChang 2018 Olympic Winter Games. In the US, the work was organized in the Verizon 5G Technology Forum (5GTF), targeting a specific commercial use case. As most companies working in KT 5G-SIG and 5GTF were the same, several physical layer (PHY) solutions were identical even though separate specifications were published (see [7,8]).

The main difference between these two pre-standard solutions was that KT 5G-SIG aimed for seamless mobility between LTE and the pre-standard 5G trial service, while 5GTF was designed with a Fixed Wireless Access (FWA) use case in mind. The selected user plane protocol architecture, depicted in Figure 3.2, was designed to support dual connectivity (DC) between commercial LTE radio access and the KT 5G-SIG radio solution. This new service would only be available for selected customers with new handheld devices able to support the KT 5G-SIG defined pre-5G radio technology.

KT 5G-SIG user plane protocol architecture with arrows labeled bearer over 5G/LTE and bearer over LTE pointing to 5G node and eNB, respectively. The arrows lead to 5G-SIG PHY layer and LTE PHY layer.

Figure 3.2 KT 5G‐SIG user plane protocol architecture [9].

On the other hand, the work in 5GTF aimed to deliver a fiber-over-wireless type of service, in which a very high throughput radio link provides the last hop connection for homes and offices instead of installing a fiber cable. In this solution, fixed Customer Premises Equipment (CPE) devices terminate the 5GTF radio connection and provide a backhaul link to end users in home and office networks, as shown in Figure 3.3. The benefit of this solution is that it allows end users to utilize their existing devices with a Wi-Fi radio connection to connect to the CPE and then to the Internet. Additionally, the installation costs in homes and offices are low compared to fixed fiber cabling costs.

5G-TF system architecture displaying 5GTF-eNB at the middle of 7 houses with CPE devices.

Figure 3.3 5G‐TF system architecture.

Due to the different use cases, KT 5G-SIG and 5GTF utilized different architecture options. However, to avoid extensive specification and implementation work, major parts of the radio protocols and functions were harmonized between 5G-SIG and 5GTF. The harmonization was done in the PHY as well as in the higher layers wherever possible. Both systems supported:

  1. Operation at 28 GHz
  2. 100 MHz channel bandwidth, with a maximum of eight aggregated carriers
  3. 5 Gbps maximum DL data rate
  4. Massive MIMO with support for a hybrid beamforming transmitter/receiver (TX/RX) architecture for both UE and Base Station (BS) receivers and transmitters
  5. CP-OFDMA for both UL and DL directions
  6. Subcarrier spacing (SCS) of 75 kHz with a 2048-point fast Fourier transform (FFT) size
  7. 0.2 ms frame length with support of a bi-directional frame as shown in Figure 3.4
  8. Low-density parity check (LDPC) channel coding for the data channel
  9. Tail-Biting Convolutional Coding (TBCC) for control channels
  10. Support of TX and RX beamforming for all channels, including synchronization channels
  11. Beam management for beamformed data transmissions
  12. Concatenation in Medium Access Control (MAC) for optimized L2 processing at high data rates

Bi-direction frame type in KT 5G-SIG and 5GTF represented by a series of linked boxes labeled DL Cont. (0), DRMS (1), DL/UL Data (2-11), GP (12), and UL cont. (13).

Figure 3.4 Bi‐direction frame type in KT 5G‐SIG and 5GTF.

When comparing the features defined for the KT 5G-SIG and 5GTF pre-standard solutions with those defined in 3GPP Release 15, one can observe that many features defined in these pre-standard solutions are also supported by the 3GPP specifications. However, there are several differences in how these features are defined in detail.

The reasons for these differences are manifold. The 5G New Radio (NR) standard in 3GPP needs to support carrier frequencies from 600 MHz to 42 GHz, instead of a single 28 GHz band. Instead of the single use case implementations supported by the pre-standard solutions, cost and energy consumption optimized multimode NR and LTE device and BS implementations are expected. The pre-standard solutions also took many design choices as quick engineering decisions motivated by fast implementation rather than by optimizing performance, energy consumption, or generic adaptability. In the following sections, we go through the 3GPP Release 15 radio solution in more detail, starting with the basic building blocks of the PHY and then discussing the actual realization of the 3GPP Release 15 radio interface, covering also the radio protocols.

3.2 Basic Building Blocks

3.2.1 Waveforms for Downlink and Uplink

In the 5G research phase different waveform options were extensively evaluated. The main motivation for these studies was to find new waveform solutions for improving frequency localization, compared to CP‐OFDM while keeping the benefits of CP‐OFDM, such as simple channel estimation and equalization, good support for MIMO, and flexible multiplexing of users in the frequency domain (FDM). At the same time, the waveform should avoid high peak‐to‐average power ratio (PAPR), extensively complex solutions, and preferably maintain the good time resolution of CP‐OFDM with low inter‐symbol interference.

Additionally, the studies considered NR propagation characteristics between 6 and 52.6 GHz and at frequencies above 52.6 GHz. At high frequencies, the radio environment differs from traditional cellular bands, especially in terms of reduced delay spread and significantly higher attenuation of non-line-of-sight (NLOS) radio links.

The improved frequency localization is motivated by two main drivers. First, there was a clear desire to improve the spectral efficiency of the 5th generation telecommunication system by reducing the required guard bands and improving spectral utilization over LTE, where 90% spectral utilization was achieved in most of the channels. This implies that in an LTE 20 MHz channel, 2 MHz is dedicated to guard bands and only 18 MHz is used for data transmission. Secondly, improved spectral localization is especially beneficial when multiple different technologies, waveforms, or OFDM numerologies operate in parallel on the same band, as it reduces the guard bands needed between them. Furthermore, improved spectral localization would be beneficial when contention-based access with unsynchronized transmission is supported in the uplink: it reduces the interference that unsynchronized transmissions cause on adjacent OFDM subcarriers in the same subframe as synchronized uplink transmissions.

The main candidate waveforms that gained interest after the research phase were conventional CP-OFDM, CP-OFDM with windowed overlap-and-add (WOLA) processing, universally filtered multicarrier (UFMC), and filtered orthogonal frequency division multiplexing (f-OFDM). These waveforms were mainly considered for the below 6 GHz frequency bands, where channel bandwidths are narrower and spectrum availability is more limited compared to higher frequencies. The common denominator of the considered waveforms is that they build on the conventional CP-OFDM waveform, for which a transmitter is presented in Figure 3.5.

Conventional CP-OFDMA transmitter block diagram with data symbols (dots) and boxes labeled sub carrier mapping, IFFT, CP insertion, and parallel to serial.

Figure 3.5 Conventional CP‐OFDMA transmitter block diagram.
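The processing chain of Figure 3.5 can be sketched in a few lines of NumPy. The FFT size, CP length, and subcarrier mapping below are illustrative choices, not values mandated by any specification.

```python
import numpy as np

def cp_ofdm_symbol(data_symbols, n_fft=2048, cp_len=144):
    """One CP-OFDM symbol: subcarrier mapping -> IFFT -> CP insertion.

    The parallel-to-serial step is implicit in returning a 1-D sample stream.
    """
    grid = np.zeros(n_fft, dtype=complex)
    n_sc = len(data_symbols)
    # Illustrative mapping around DC; a real system follows the resource grid
    grid[(np.arange(n_sc) - n_sc // 2) % n_fft] = data_symbols
    time_domain = np.fft.ifft(grid) * np.sqrt(n_fft)
    # Cyclic prefix: prepend a copy of the symbol tail
    return np.concatenate([time_domain[-cp_len:], time_domain])

# One PRB (12 subcarriers) of QPSK symbols
rng = np.random.default_rng(0)
qpsk = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), 12)
sym = cp_ofdm_symbol(qpsk)
```

The cyclic prefix makes the linear channel convolution appear circular at the receiver, which is what enables the simple one-tap frequency-domain equalization mentioned above.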

In Figure 3.6, the power spectral density (PSD) responses for CP-OFDM, WOLA, universal filtered orthogonal frequency division multiplexing (UF-OFDM), and f-OFDM are given for a one physical resource block (PRB) allocation at the edge of a 10 MHz channel, assuming a maximum allocation of 50 PRBs and following the LTE numerology.


Figure 3.6 Example PSD realizations for different waveform candidates.

WOLA, which is based on cyclic extension and time-domain windowing of CP-OFDM symbols combined with overlap-and-add processing, reduces the out-of-band spectral emissions by smoothing the transition from one CP-OFDM symbol to another. In Figure 3.6, the PSD response for WOLA is shown for two different window slope lengths, Nws = 18 and Nws = 36, where the window slope length defines the length of the rising or falling slope of the time-domain raised cosine window. The window length can also be expressed as a roll-off, which in this case corresponds to r = 1.6% and r = 3.3% for window slope lengths Nws = 18 and Nws = 36, respectively.
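A minimal sketch of the raised cosine window used in WOLA (the symbol and slope lengths here are illustrative): the rising slope of one symbol and the falling slope of the previous symbol sum to unity, so the overlap-and-add step preserves the signal in the flat region.

```python
import numpy as np

def wola_window(n_sym, n_ws):
    """Raised cosine window with a flat top and slopes of n_ws samples.

    The CP-OFDM symbol is cyclically extended by n_ws samples so that the
    sloped edges of consecutive windowed symbols overlap in time.
    """
    w = np.ones(n_sym + n_ws)
    t = (np.arange(n_ws) + 0.5) / n_ws
    slope = 0.5 * (1 - np.cos(np.pi * t))  # raised cosine transition 0 -> 1
    w[:n_ws] = slope          # rising edge
    w[-n_ws:] = slope[::-1]   # falling edge
    return w

w = wola_window(n_sym=2192, n_ws=36)
```

A longer slope (larger Nws) suppresses out-of-band emissions more but consumes more of the cyclic extension, which is the trade-off behind the two roll-off values quoted above.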

For UF-OFDM, a Dolph-Chebyshev window is used to define the time-domain filter response, which is adapted by two parameters: the length of the filter, NF, and the minimum attenuation in the stopband, Amin (see [10,11]). By tuning these two parameters, a trade-off between the filter length and the filter's 3 dB bandwidth is obtained.

The f-OFDM waveform is a Hann windowed sinc-function-based sub-band filtered CP-OFDM waveform (see [12]). The sinc-function corresponds to the given allocation size, and its time-domain response is windowed with the well-known Hann window. The filter length is half of the OFDM symbol length and in this example corresponds to NF = 512 samples. To adjust the in-band distortion caused by the filtering, a tone offset (TO) has been introduced. The tone offset defines the increase in the passband width as an integer multiple of the SCS; thus, in this case TO = 4 corresponds to a 4 × 15 = 60 kHz increase in the passband width.
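The prototype filter described above can be sketched as follows. The sample rate and allocation size are illustrative LTE-like numbers, and the unit-DC-gain normalization is a simplification of a real filter design.

```python
import numpy as np

def f_ofdm_filter(n_f=512, n_prb=50, scs_hz=15e3, fs_hz=15.36e6, tone_offset=4):
    """Hann-windowed sinc prototype filter for f-OFDM (sketch).

    The passband covers the allocation (12 subcarriers per PRB) widened by
    tone_offset subcarriers, i.e. TO = 4 adds 4 x 15 kHz = 60 kHz.
    """
    passband_hz = (12 * n_prb + tone_offset) * scs_hz
    n = np.arange(n_f) - (n_f - 1) / 2
    h = np.sinc(n * passband_hz / fs_hz)  # ideal low-pass impulse response
    h *= np.hanning(n_f)                  # Hann window truncates the sinc
    return h / h.sum()                    # normalize to unit DC gain

h = f_ofdm_filter()
```

Widening the passband via the tone offset moves the filter transition band outside the edge subcarriers of the allocation, which is how the in-band distortion is kept low at the price of slightly more adjacent-band leakage.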

Additionally, different single carrier waveforms, such as Discrete Fourier Transform spread OFDM (DFT-s-OFDM, with CP or zero tail [ZT]), have been considered for uplink access and for higher millimeter wave frequencies. Here we consider centimeter wave frequencies to cover frequencies from 3 to 30 GHz and millimeter wave frequencies to cover frequencies from 30 to 300 GHz.

For millimeter wave frequencies, it is important to have very low PAPR, because power amplifier (PA) efficiencies tend to drop as the carrier frequency increases. Therefore, to obtain reasonable power efficiency and emitted powers from handheld devices, minimization of the PAPR is critical. When considering DFT-s-OFDM, pulse shaping allows the PAPR of the transmitted signal to be reduced further. The pulse shaping filter is typically a root-raised cosine filter implemented in the frequency domain of the ZT DFT-s-OFDM transmitter. This improvement in PAPR comes at a cost in spectral efficiency, as increasing the pulse shaping roll-off factor reduces PAPR; hence a trade-off between PA efficiency and spectral efficiency needs to be carefully considered. On the other hand, at millimeter wave frequencies we can expect channel bandwidths up to 2 GHz, and therefore spectral efficiency is not that critical for achieving high throughputs. Furthermore, ZT DFT-s-OFDM allows symbol synchronization to be maintained while allowing variable guard periods (GPs), e.g. for control and data signaling or for different users, and it is considered a strong candidate waveform for the beyond 52.6 GHz operation studied in 3GPP Release 16.
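The PAPR gap between CP-OFDM and DFT-s-OFDM that motivates this discussion can be illustrated with a short Monte Carlo sketch (illustrative sizes, no oversampling or pulse shaping):

```python
import numpy as np

rng = np.random.default_rng(42)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def ofdm_symbol(freq_symbols, n_fft=1024):
    grid = np.zeros(n_fft, dtype=complex)
    n = len(freq_symbols)
    grid[(np.arange(n) - n // 2) % n_fft] = freq_symbols
    return np.fft.ifft(grid)

qpsk_alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
papr_cp, papr_dfts = [], []
for _ in range(200):
    qpsk = rng.choice(qpsk_alphabet, 300)
    papr_cp.append(papr_db(ofdm_symbol(qpsk)))                # plain CP-OFDM
    papr_dfts.append(papr_db(ofdm_symbol(np.fft.fft(qpsk))))  # DFT-spread first
```

DFT spreading restores a single-carrier-like envelope, and the simulated average PAPR of DFT-s-OFDM comes out several dB below that of CP-OFDM, which is exactly the property that makes it attractive for power-limited uplinks.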

During the 3GPP standardization process, different companies proposed different waveforms for evaluation, resulting in an industry agreement to rely on CP-OFDM-based waveforms for below 52.6 GHz communications (see [13]). These were considered the most suitable choice for the downlink, uplink, and device-to-device (D2D) transmissions in NR. For 5G MBB services, CP-OFDM is well suited due to its good time-localization properties, enabling low latency and low-cost receivers with good MIMO and beamforming performance. Additionally, it was demonstrated that different filtering schemes, such as UF-OFDM and f-OFDM, can be introduced separately in transmitter and receiver units, and it is not necessary to utilize the same signal processing method at both ends.

The simple reason for this is that all the considered methods try to manipulate the out-of-band signal component in the transmitter so that the signal energy in adjacent frequencies is reduced while avoiding distortion of the desired in-band signal. Similarly, on the receiver side, these techniques provide means to suppress the interference coming from adjacent channel and out-of-band signals without affecting the desired in-band signal. Therefore, these transmitter and receiver characteristics can be tested with separate TX and RX tests, as shown in Figures 3.7 and 3.8 for the transmitter and receiver, respectively (see [14]). Furthermore, this testing approach allows the transmitter and receiver signal processing techniques to be independent, enabling separate development of transmitter and receiver implementations, which can open a completely new avenue for future technology development.

Diagram of transmitter unit test setup depicting the flow from TX Unit under test to emission measurement and channel emulator, to reference RX, to signal quality evaluation.

Figure 3.7 Transmitter unit test setup.

Diagram of receiver unit test setup depicting the flow from reference TX unit to channel emulator, to RX unit under test, to signal quality evaluation and from interference source to RX unit under test.

Figure 3.8 Receiver unit test setup.

As the different filtering solutions cannot totally avoid introducing inter-symbol interference (ISI) to the desired in-band signal, it is beneficial to perform filtering only when needed. Especially in downlink, where transmissions for different UEs are synchronized, the transmitter should filter the signal per numerology rather than per sub-band (e.g. per UE allocation). When a high modulation and coding scheme (MCS) is used in high Signal to Noise Ratio (SNR) conditions, the ISI becomes a dominating factor, and thus filtering may become the limiting factor on the maximum performance due to increased transmitter error vector magnitude (EVM) (see [14,15]).

In previous mobile technology generations, such as 2G and 3G, the waveform processing solutions were fixed by the standard, and the RX and TX processing solutions were limited to match the standard. In those cases, any change to the standard would have required a new generation of mobile devices and network infrastructure, operating on bands different from those of the originally used waveform. With the new transparent waveform processing approach, however, both UE and BS implementations may independently introduce new TX solutions that improve signal spectrum containment and new RX solutions that reduce adjacent carrier and out-of-band interference, in a fully backward compatible manner, with the requirement that the transmitter and/or receiver at the other end may also be a conventional CP-OFDM transmitter and/or receiver.

Due to the above reasons, it is difficult to find a globally optimal solution when considering all the different frequency bands supported by 5G NR. Due to the wide variety of use cases having, e.g. different spectral containment, latency, and TX and RX signal quality requirements, it can be expected that several different solutions on top of conventional CP-OFDM will be applied in the network and in the UE. This exemplifies the potential of transparent waveform processing in enabling 5G NR to support all the diverse use cases envisioned.

The 3GPP TSG-RAN WG1 specifications do not define any filtering scheme or requirements. Rather, filtering requirements are set in the 3GPP TSG-RAN WG4 specifications for below 52.6 GHz operation, when defining in-band blocking and emission requirements for adjacent PRBs and neighboring channels. Similarly, TX and RX side filtering processing can be considered when defining out-of-band blocking and emission requirements for the receiver and transmitter, respectively.

In addition to the spectrum containment discussion, and due to CP-OFDMA's known limitation of a considerably high PAPR, additional support for a single carrier waveform was considered to improve uplink coverage, so that 5G uplink coverage would be comparable to LTE [16].

The conclusion was that DFT-s-OFDM was selected as an additional uplink waveform for single stream and low MCS transmission, targeted to be supported mainly in coverage limited scenarios. The support of DFT-s-OFDM is compulsory for UEs but optional for base stations. Therefore, the main difference compared to the LTE uplink is that in NR, CP-OFDMA is the main option for the uplink waveform and DFT-spread OFDM an additional option, whereas DFT-spread OFDM is the only option in the LTE uplink.

Release 15 covered communications in the frequency range up to 52.6 GHz [17]. Frequencies beyond 52.6 GHz are considered in Release 16. This study mainly considers operation with single-carrier-like waveforms. The highest emphasis is on CP DFT-s-OFDM and ZT DFT-s-OFDM waveforms, as they allow efficient frequency domain processing and can share several TX and RX functions with existing solutions in the implementation.

As discussed, at high carrier frequencies the PA efficiencies tend to drop, and low PAPR is a critical design target for waveforms evaluated for such communications. CP-OFDM could also be supported as an optional, short distance and high throughput waveform, as in WLAN 802.11ad (see [18]). As 3GPP is designing a global mobile communication system, the DL and UL waveforms must provide sufficient coverage to allow reasonably dimensioned network implementations. Therefore, minimizing PAPR, which maximizes the emitted signal power and PA efficiency while minimizing energy consumption and PA cost, is clearly the most important design parameter. Other important aspects are computational complexity, good time resolution, and zero prefix.

As the targeted throughputs are gigabits per second, the computational complexity per data bit should be smaller than at traditional carrier frequencies. This is already alleviated by the assumptions of a lower number of spatial streams and lower modulation orders to be used beyond 52.6 GHz. The number of spatial streams is typically assumed to be limited to two, which is achieved in Line of Sight (LOS) communications using two different polarizations. The lower modulation orders are sufficient to achieve high throughputs, as the channel bandwidths may be up to 2 GHz, allowing ultra-high throughputs for end users even with binary phase shift keying (BPSK) modulation.
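A back-of-the-envelope check of that claim, where the overhead factor and stream count below are assumptions for illustration, not 3GPP figures:

```python
# Rough peak-rate estimate for a 2 GHz carrier beyond 52.6 GHz.
bandwidth_hz = 2e9
n_streams = 2          # two polarizations in LOS
overhead = 0.25        # assumed guard/CP/control overhead

for modulation, bits_per_symbol in {"BPSK": 1, "QPSK": 2}.items():
    rate_bps = bandwidth_hz * bits_per_symbol * n_streams * (1 - overhead)
    print(f"{modulation}: ~{rate_bps / 1e9:.1f} Gbps")
# -> BPSK: ~3.0 Gbps, QPSK: ~6.0 Gbps
```

Even with the most robust modulation, the sheer channel bandwidth keeps the peak rate in the gigabit-per-second regime.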

Good time resolution is critical for the desired waveform as in millimeter wave communications the main use case is assumed to be beamformed Time Division Duplex (TDD). Beamforming is required to overcome the increased pathloss at higher carrier frequencies and TDD is the most likely duplexing scheme with very narrow beams, as it is difficult to achieve reasonable multiplexing gains with Frequency Division Duplex (FDD).

One promising solution is to utilize ZT instead of CP in DFT-s-OFDM transmission. This allows the transmitted symbol energy to drop near zero between symbols, allowing the transmitter to switch TX beams between symbols and enabling highly efficient and agile beamforming for millimeter wave communications. A zero prefix is also easy to combine with pulse shaping filtering, allowing the PAPR of the transmitted signal to be reduced further while keeping low-power symbol transitions. The above-mentioned ZT DFT-s-OFDM is an excellent solution, as it allows symbol synchronization to be constantly maintained in the system while fulfilling all the requirements, and it adds on top the possibility to adapt the ZT duration per symbol.
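A sketch of the zero-tail idea follows; the sizes are chosen so that the FFT-to-DFT ratio is an integer, which makes the tail property exact at those sample positions (real NR numerologies differ).

```python
import numpy as np

def zt_dft_s_ofdm(data, n_dft=128, n_zt=16, n_fft=1024):
    """Zero-tail DFT-s-OFDM symbol (sketch).

    Zeros appended to the data block before DFT spreading reappear as a
    low-power tail in the time domain, replacing the cyclic prefix.
    """
    x = np.zeros(n_dft, dtype=complex)
    x[: n_dft - n_zt] = data            # data followed by n_zt zero inputs
    grid = np.zeros(n_fft, dtype=complex)
    grid[:n_dft] = np.fft.fft(x)        # DFT spreading, contiguous mapping
    return np.fft.ifft(grid)

rng = np.random.default_rng(7)
alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
td = zt_dft_s_ofdm(rng.choice(alphabet, 128 - 16))
# With an integer FFT/DFT ratio (1024/128 = 8), every 8th output sample is a
# scaled copy of one input, so the samples mapped from the zero inputs are
# exactly zero; the samples in between carry small interpolation ripple.
step = 1024 // 128
```

Because the tail power is merely low rather than exactly zero over the whole guard region, the scheme trades a little residual leakage for the ability to vary the guard length per symbol, which is what enables the per-symbol beam switching described above.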

It is expected that π/2-BPSK modulation will also play an important role in millimeter wave communications. It has been accepted for use with the CP DFT-s-OFDM-based uplink in 5G NR. In frequency range (FR) 1 (450 MHz–6 GHz) it plays a relatively small role, as devices can achieve the maximum allowed emitted power of 23 dBm even with Quadrature Phase-Shift Keying (QPSK) modulation using CP-OFDM. However, future high power (HP) UE classes, which can transmit with higher power using smaller duty cycles, may benefit from this modulation. On the other hand, in millimeter wave communications, UE devices that can achieve only small array gains in beamforming and use cheaper, lower efficiency PAs need to rely in most cases on π/2-BPSK modulation to achieve good UL coverage. As in the waveform design, the pulse shaping of π/2-BPSK modulation was not explicitly defined in the specification. Instead, it is indirectly limited by the spectral flatness requirements imposed on the RX side.
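A generic π/2-BPSK mapper can be sketched as below; this illustrates the rotation principle only, and the exact 3GPP TS 38.211 constellation mapping differs in its phase offset.

```python
import numpy as np

def pi2_bpsk(bits):
    """pi/2-BPSK: BPSK with a cumulative 90-degree rotation per symbol.

    Consecutive symbols then differ by exactly +/-90 degrees, so the signal
    trajectory never passes through the origin, which lowers the PAPR of the
    pulse-shaped single carrier signal.
    """
    bpsk = 1.0 - 2.0 * np.asarray(bits)                        # 0 -> +1, 1 -> -1
    rotation = np.exp(1j * (np.pi / 2) * np.arange(len(bpsk))) # i^n rotation
    return bpsk * rotation

symbols = pi2_bpsk([0, 1, 1, 0, 1, 0])
```

Avoiding origin crossings is what keeps the envelope variation, and hence the PA back-off, small compared to plain BPSK.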

3.2.2 Multiple Access

The 5G NR radio access is targeted to support both paired and unpaired spectrum with maximum commonality, with FDD used in paired spectrum and TDD in unpaired spectrum. The FDD transmission mode has been the dominant operational mode in previous cellular systems, whereas TDD has mainly been used in China.

An FDD system requires separate frequency blocks for DL and UL transmission with a sufficient frequency gap allocated between them, the so-called duplex distance between DL and UL. The benefit of FDD is that, with a sufficient duplex distance, the device's transmitter can be sufficiently isolated from the device's receiver with a duplex filter, allowing simultaneous UL transmission and DL reception. This is very suitable for URLLC, as it minimizes both DL and UL latencies, and the Hybrid Automatic Repeat Request (HARQ) feedback loop delay is not bounded by DL-to-UL or UL-to-DL transmission direction switching.

The drawback of FDD is that separate spectrum allocations for DL and UL are needed, with a sufficient duplex distance between the UL and DL portions on the same band. This can be very difficult due to the available spectrum band allocation. Additionally, the required duplex distance increases as the carrier frequency increases, with the result that above 3 GHz TDD starts to be more attractive, and at high millimeter wave (mmWave) frequencies TDD is the only option. Finally, in FDD the spectrum resource allocation for the UL and DL directions is fixed, as the allocation of the UL part for DL operation is not possible. Configurations where the distance between the UL and DL parts is not fixed, i.e. there are multiple possible downlink portions, or any downlink portion is possible for a single uplink band, can be considered a flexible duplexing approach, which sets new requirements for the duplex filtering.

In TDD, the transmission resources are divided between UL and DL in time. The benefit of TDD is that only a single spectrum allocation is needed, as both uplink and downlink operate on the same frequency and no duplex distance definition is required, which is essential for frequencies above 6 GHz. This also allows TDD spectrum to be allocated between the FDD UL and DL frequency portions, as shown in Figure 3.9. This is especially needed for spectrum allocations below 6 GHz, where the scarce spectral resources must be used efficiently.

Illustration of spectrum allocation on 2.6 GHz in Germany, with TDD allocated between the FDD UL and DL frequency portion. Arrows indicate Telefonica and DTAG in TDD.

Figure 3.9 Spectrum allocation on 2.6 GHz in Germany.

In addition, in TDD operation the spectrum resource allocation between the UL and DL directions is not fixed, as UL and DL frames can be dynamically allocated in NR by the packet scheduler of the next generation NodeB (gNB) based on capacity needs. This operation is often referred to as dynamic TDD. While utilizing spectrum between the UL and DL frequency allocations of an FDD band and using dynamic TDD both have clear benefits, both also have limitations and restrictions that set requirements for the implementation and the system operation.

The TDD spectrum allocation between the FDD uplink and downlink portions is hindered, as high power FDD downlink transmission can easily interfere significantly with TDD UE DL reception and BS UL reception. Similarly, an FDD uplink UE transmission may interfere with another UE's TDD DL reception nearby. Finally, TDD UE and BS transmissions can cause interference to FDD UE reception. These interference conditions are most significant at the band edges, where out-of-band blocking requirements are toughest to meet. In LTE, these interference problems were relieved by reducing the actual operating bandwidth, i.e. by not allocating resource blocks for UE transmission in the UL. In LTE downlink transmission, the BS can either utilize additional proprietary TX filtering or not schedule resource blocks for DL transmissions. Both methods reduce the bandwidth utilization of the spectrum, and therefore advanced waveform processing solutions in both TX and RX, aiming to improve spectral containment, become interesting techniques for enabling better utilization of such bands.

In dynamic TDD, the dynamic resource allocation between UL and DL can provide significant performance improvements, as the resources can be fully dynamically allocated either for DL or UL depending on traffic needs. Even though the statistical ratio between uplink and downlink traffic is 1 : 10 in today's Internet traffic, the actual capacity demand can change significantly. In urban area network deployments, events such as music concerts and sport events are found to create significant uplink traffic with different social media picture and video uploads from the event. In office or hotspot deployments, the number of simultaneously active users can be low; thus, the immediate capacity need of the actual user applications will dominate the needed UL/DL capacity split. Therefore, the capacity requirement at a given moment in time is not based on a statistical distribution; rather, the statistical distribution may only be reached over a longer time period. In both cases, it is apparent that a fixed allocation between UL and DL resources can easily lead to a situation where either direction is highly congested, limiting the achievable data rate. Additionally, as the traffic is bi-directional, this may result in the under-utilization of resources in the other direction.

However, due to UL-to-DL and DL-to-UL interference scenarios very similar to those of TDD band allocations between FDD UL and DL portions, the TDD UL/DL allocation is expected to be quite fixed in NR TDD macro deployments. In such cases, all BSs operate synchronized with the same DL and UL frame pattern, ensuring that cross-link interference does not occur. This is needed not only between the BSs of a single operator but also between operators in macro deployments without special antenna constellations. As a result, the DL/UL TDD configuration needs to be selected based on the statistical distribution of DL and UL traffic. At higher frequencies, with small cell deployments and especially deployments with active antenna systems (AASs), the TDD configuration can vary more freely between cells based on DL and UL traffic needs.

To enable dynamic TDD operation, several enabling design choices were made in the NR slot design and PHY operation. In the slot design, dynamic TDD is supported by defining DL and UL control channel location options that do not suffer from cross-link interference between frame-synchronized cells, even when user plane traffic is allocated to different directions. Additionally, the UE does not make any assumption on whether a certain slot contains DL or UL data portions; rather, it operates based on scheduling information in the physical downlink control channel (PDCCH). Furthermore, the channel estimation and radio measurement design does not assume continuously present reference symbols in fixed locations that would mandate certain DL transmissions. These design choices allow gNBs to freely choose the slot format based on actual capacity needs. Correspondingly, fixed TDD operation is achieved by network-implementation-dependent configurations that mandate fixed scheduling of UL and DL slots.

When the transmission direction is not fixed to any uplink and downlink transmission pattern, dynamic TDD can be further leveraged to support in-band backhaul in mmWave deployments, simplifying BS deployments as no fiber backhaul cabling is needed. In this case, the time resources are distributed between the data link and the backhaul link, thus converting maximum throughput into deployment flexibility.

3.2.3 5G Numerology and Frame Structures

As discussed in Section 3.1.2, the pre-standard solutions applied 75 kHz SCS, five times higher than in LTE, for supporting the 28 and 39 GHz frequencies in specific use cases. In 3GPP, however, the target was to support a wider range of spectrum as well as different use cases and system bandwidths: spectrum ranging from 600 MHz up to 100 GHz, and system bandwidths from 5 MHz up to 1 GHz, or even up to 2 GHz for future extensions to millimeter-wave frequencies above 52.6 GHz. Additionally, the numerology must support excellent co-existence with LTE systems and allow economical implementation of multimode devices supporting both LTE and NR. Therefore, it became apparent that 75 kHz alone is not a sufficient SCS, even when considering only the spectrum below 40 GHz in the first phase of NR. Furthermore, UE and gNB vendors widely expressed the need to consider FFT/IFFT sizes larger than the 2048 used in LTE to support wider carrier bandwidths.

The requirement of supporting wide area cells and the best possible co-existence with LTE resulted in 15 kHz SCS support in NR. To allow economical implementation of multimode devices, 3GPP adopted so-called 2^N scaling of 15 kHz, where N is a non-negative integer, which results in the SCS and nominal system bandwidth options in different frequency ranges shown in Table 3.1.

Table 3.1 Subcarrier spacing, nominal BW and frequency range.

N for 15·2^N scaling               0         1         2           3            4
Subcarrier spacing (kHz)           15        30        60          120          240
Supported frequency range (GHz)    <1–6      <1–6      1.7–52.6    24.25–52.6   24.25–52.6
PRB bandwidth (kHz)                180       360       720         1440         n/a
Max number of resource blocks      270       273       264         264          n/a
Max BW (MHz)                       48.6      98.28     190.08      380.16       n/a
Max FFT size                       4096      4096      4096        4096         n/a
T symbol (µs)                      66.7      33.33     16.67       8.33         4.17
CP (µs)                            4.69      2.34      1.17        0.59         0.29
#symbols per slot                  14        14        14          14           14
Slot duration (ms)                 1         0.5       0.25        0.125        0.0625
#slots in frame                    10        20        40          80           160

(The 240 kHz SCS is used only for SSB transmission, so no data channel parameters are defined for it.)
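The scaling relations behind Table 3.1 can be reproduced with a short calculation. The following Python sketch (illustrative only; values rounded as in the table) derives the SCS, symbol duration, slot duration, slots per frame, and PRB bandwidth from the 2^N scaling rule:

```python
# NR numerology scaling: SCS = 15 kHz * 2^N.
# Symbol and slot durations shrink by the same factor of 2^N.
for n in range(5):
    scs_khz = 15 * 2**n                # subcarrier spacing
    t_sym_us = 1e3 / scs_khz           # OFDM symbol duration (without CP), in us
    slot_ms = 1.0 / 2**n               # duration of a 14-symbol slot
    slots_per_frame = 10 * 2**n        # slots in a 10 ms frame
    prb_khz = 12 * scs_khz             # PRB = 12 subcarriers
    print(f"N={n}: SCS={scs_khz} kHz, Tsym={t_sym_us:.2f} us, "
          f"slot={slot_ms} ms, slots/frame={slots_per_frame}, PRB={prb_khz} kHz")
```

Running the loop reproduces the SCS, symbol duration, slot duration, slots-per-frame, and PRB bandwidth rows of Table 3.1.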

As mentioned, the 15 kHz SCS is motivated by excellent co-existence with LTE, as well as by the support of very large cells in rural and sub-urban areas with large delay spreads. Additionally, 15 kHz SCS is needed to support bands with a maximum system bandwidth of only 5 MHz with high spectral efficiency. The minimum operating bandwidth per numerology is defined by the Synchronization Signal (SS) Block, as discussed in Section 3.3.1.

The 30 kHz SCS is mainly targeted at urban macro cells, where the typical inter-site distance (ISD) is below 1 km. Thus, it is anticipated that 30 kHz SCS is directly usable at current urban macro sites on low carrier frequencies, and it is therefore the main deployment option in many frequency bands below 6 GHz.

The 60 kHz SCS is an option for carrier frequencies above 3 GHz and can be utilized even above 6 GHz. However, at high carrier frequencies the 60 kHz SCS places demanding requirements on the oscillators to compensate for carrier frequency error and phase noise. Therefore, 120 kHz SCS was also introduced and is the preferred numerology for frequencies above 24 GHz. Additionally, 120 kHz can provide 400 MHz system bandwidth with a 4K FFT size, which is highly beneficial in the spectrum above 24 GHz. To support a high number of synchronization signal and physical broadcast channel (PBCH) beams with a short sweeping procedure, the 240 kHz SCS was introduced for Synchronization Signal Block (SSBlock or SSB) transmissions (see Section 3.3.1).

In a typical OFDM design, the cyclic prefix (CP) is approximately 5% of the symbol length, introducing a corresponding fixed overhead into the system. In addition to SCS and the normal CP, an extended CP was considered. This was mainly proposed for 60 kHz SCS to extend the possible coverage of a system operating with such short symbols in macro cell deployments, while maintaining the benefit of a short slot length to minimize system delays. However, the need for this option became significantly less important when the mini-slot concept with a two-symbol transmission length was introduced for frequencies below 6 GHz. Additionally, above 6 GHz a one-symbol mini-slot concept is supported (see [13]).

The basic building unit of NR radio access is the Resource Element (RE), defined as one OFDM subcarrier in the frequency domain and one OFDM symbol in the time domain from a single antenna port. In the frequency domain, 12 REs are grouped into a PRB. The PRB defines the minimum granularity of a transmission in frequency, being 180 kHz for 15 kHz SCS, and it doubles each time the SCS is doubled, as shown in Table 3.1. For this reason the maximum channel bandwidth is given as a number of PRBs, as shown in Table 3.1, and the remaining part of the carrier is used as guard band. As a result, with higher SCS the frequency utilization is slightly lower in narrower operating bands.

As the PRB defines the minimum granularity in frequency, it is also the basic unit for scheduling resources in the frequency domain, just as in LTE. There were no obvious benefits in using a different PRB size, and commonality with LTE was an enabler for LTE and NR frequency sharing; thus, the LTE choice was carried forward to NR, even though, e.g. a 16-subcarrier PRB was considered. Making frequency allocations in terms of PRBs of a fixed number of subcarriers reduces the signaling overhead compared to assigning subcarriers directly. Additionally, the same frequency-domain scheduling implementation can be applied regardless of the SCS used. Furthermore, if multiple numerologies are used on a given carrier, an integer number of lower-SCS PRBs always matches one higher-SCS PRB, leading to a convenient nested PRB grid for all SCSs.
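The nested grid property can be checked numerically. The helper below is a hypothetical illustration (not a 3GPP-defined function) that computes the frequency span of PRB k for numerology N, measured from a common grid reference point:

```python
# Frequency span (kHz) of PRB k at subcarrier spacing 15*2^n kHz,
# measured from a common reference point of the nested PRB grid.
def prb_span_khz(k, n):
    prb_bw = 12 * 15 * 2**n        # 12 subcarriers per PRB
    return (k * prb_bw, (k + 1) * prb_bw)

# One 30 kHz-SCS PRB (n=1) aligns exactly with two 15 kHz-SCS PRBs (n=0):
assert prb_span_khz(0, 1) == (0, 360)
assert prb_span_khz(0, 0) == (0, 180)
assert prb_span_khz(1, 0) == (180, 360)
```

In general, PRB k at numerology N + 1 covers PRBs 2k and 2k + 1 at numerology N, which is what makes mixed-numerology operation on a single carrier convenient.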

In the time domain, transmissions are organized into 10 ms frames and slots of 14 symbols, with the number of slots in a frame depending on the SCS used, as shown in Table 3.1. In addition, a 1 ms subframe is defined as in LTE, but it has less significance as all definitions are based on slot-level operation.

For each slot length, the allowed slot structures have been defined in a flexible manner. However, all the different configurations follow three basic structures: the downlink-only slot, the uplink-only slot, and the bi-directional slot. The uplink-only and downlink-only slots self-evidently contain only uplink or downlink symbols, and are used in FDD for UL and DL transmission, respectively. Additionally, uplink-only and downlink-only slots can be used in TDD when the network scheduler desires a longer transmission in either direction. Different bi-directional slot variants are created by allocating the physical channels differently on the flexible symbols.

A flexible symbol can be allocated to be either a DL or a UL symbol, carrying different DL or UL channels and the corresponding Demodulation Reference Symbol (DMRS), depending on the network scheduler decision. Between the DL and UL symbols of an actual configuration, a single symbol is used as a switching gap, which also provides the necessary guard period (GP). Figure 3.10 depicts a slot containing first a DL symbol, then flexible symbols 1–12, used for either the DL or UL direction, and finally symbol 13 used for the UL direction.


Figure 3.10 Bi‐directional slot with DL symbol, flexible symbols and UL symbol.

All slot structures follow the same principles to enable fast receiver processing and to maximize commonality between the different slot types. The DL-only slot and the bi-directional slot start with a PDCCH portion, which can be from one to three symbols, followed by a DMRS and then the data symbols.

In downlink-only slots, all symbols 2–13 are DL data, and the Physical Downlink Shared Channel (PDSCH) portion can also use the REs before the DMRS if they are unoccupied by the PDCCH. Placing the DMRS early in the structure enables efficient pipelined processing, as the UE or gNB receiver can prepare the channel estimate early during the reception of the slot. This allows the receiver to start demodulating symbols and decoding the code blocks prior to having received all the symbols in the slot. Compared to LTE this allows significantly reduced processing time, defined as the time required before the UE can complete decoding of the received transport block and report an Acknowledgement (ACK) or Negative Acknowledgement (NACK) back to the gNB. In difficult channel conditions, additional DMRS symbols can be added to the slot structure to update the channel estimate. This comes with additional DMRS overhead as well as additional receiver processing requirements, as the channel estimate must be updated within a single slot based on the additional DMRS symbol(s) before demodulation of the slot can be fully completed.

The time-domain resource allocation of the PDSCH and the Physical Uplink Shared Channel (PUSCH), together with the placement of the scheduling PDCCH, allows further flexibility to facilitate different slot structures and reduced scheduling and transmission latency. PDSCH allocations (including DMRS) can be of duration 2, 4, 7, … 14 symbols and start at any DL symbol, provided the allocation does not span beyond the end of the 14-symbol slot; this introduces the mini-slots discussed above. The PDCCH scheduling the PDSCH can be placed on any symbol in the slot, provided the scheduled PDSCH does not start earlier than the PDCCH. Similarly, in the uplink, the PUSCH allocation can be of any length, provided the allocation does not span across the slot boundary (see [19]).

This nearly unlimited flexibility was designed to allow constructing so-called mini-slot structures within slots for low-latency traffic, as both the time the data waits for the start of a transmission and the data transmission duration itself become shorter. This comes at the cost of increased control and RS overhead; thus, basic data scheduling can be expected to operate on a slot basis. With low-latency UE data processing it is also possible to construct self-contained slots, where the PDCCH and PDSCH are in the first part of the slot, the UE processes the data during the DL/UL switching gap, and the UE transmits the HARQ-ACK on the Physical Uplink Control Channel (PUCCH) at the end of the same slot.

3.2.4 Bandwidth and Carrier Aggregation

As discussed above, the NR system supports many system bandwidths, including exceptionally wide ones, as depicted in Table 3.1. In the previous cellular systems (2G, 3G, and LTE), the UE RF (Radio Frequency) bandwidth for reception and transmission was equal to the system bandwidth of the cell, i.e. 5 MHz in 3G and 20 MHz in LTE. LTE Rel-8 also supported cell bandwidths narrower than 20 MHz, enabled by network configuration; in such cases the UE RF bandwidth for RX and TX operation still matched the system bandwidth of the cell.

However, as the supported system bandwidths increased significantly, mandating that the UE RF bandwidth always match the cell bandwidth for reception and transmission became overwhelming and unnecessary. To allow a UE to operate with a narrower RX and TX bandwidth than the cell bandwidth, the bandwidth part (BWP) was introduced. The BWP design allows UE-specific control of each RF transmitter and receiver chain of the UE.

During initial access the UE utilizes the initial BWP for receiving the SSB and initializing the connection via the Random Access Channel (RACH). After the Radio Resource Control (RRC) connection is established, the gNB can configure the UE with a UE-specific set of BWP(s). In both DL and UL, a UE can be configured with up to four BWPs, with a single BWP being active at a given time. Figure 3.11 illustrates the system bandwidth used by the gNB for transmitting and receiving data in the cell, the initial BWP for initial access, and the separate dedicated BWPs configured for two devices, UE1 and UE2.


Figure 3.11 System bandwidth, initial BWP and configured BWP.

When a UE-dedicated BWP is configured, the UE tunes its RF to the configured BWP, and therefore the UE is not expected to receive any physical channels or signals, such as PDSCH, PDCCH, or Channel State Information Reference Signal (CSI-RS), outside the active BWP.

Carrier aggregation, introduced in LTE Release 10, is a feature for aggregating transmissions from multiple cells. In NR, as the cell bandwidth is decoupled from the UE RF bandwidth, carrier aggregation aggregates the different BWPs configured for the UE in the different cells. In each cell, when a BWP is configured for the carrier aggregation operation of a UE, the same PHY definitions apply.

3.2.5 Massive MIMO (Massive Multiple Input Multiple Output)

Massive Multiple Input Multiple Output (mMIMO) is one of the essential features of 5G NR access. mMIMO technology has been considered in all aspects of the NR radio access design, and it is thus supported in all physical channels in both the UL and DL directions. The primary purposes of mMIMO are to enhance system coverage and capacity. The benefits of mMIMO are highly dependent on many factors, such as the deployment scenario, carrier frequency, channel characteristics, and antenna array configuration.

mMIMO is the extension of traditional MIMO technology to antenna arrays having a very large number of controllable antenna elements (AEs) for transmitting and receiving radio signals. The term MIMO is used in a rather broad manner to include any transmission scheme involving multiple transmit and multiple receive antennas, and the term “massive” is intended to mean a number of transmit and receive antennas at the base station much greater than eight. The UE typically operates with a significantly lower number of antennas, as a small device size and low power consumption are desired. The term “controllable” refers to antennas whose signals are adapted by the PHY via both gain and phase control to shape the overall response of the antenna array. Therefore, in addition to the actual mMIMO transmission, mMIMO relies on appropriate mechanisms for obtaining channel information so that the control of the antennas is possible and can be optimized. The methods for obtaining the necessary control information can vary depending on the transmission direction, antenna technology, UE mobility, etc.

The basic principle of traditional MIMO is to utilize separate, uncorrelated channels between the transmitter and receiver antennas so as to transmit different data streams on the same physical resources. When multiple streams are transmitted between the gNB and a single UE on the same time-frequency resources, the term Single User Multiple Input Multiple Output (SU-MIMO) is used. However, as the number of uncorrelated antennas and transmitter-receiver units (TXRUs) in practical UE implementations is often limited to 2 or 4, the number of SU-MIMO streams is typically limited to 2 or 4 in the downlink and to 2, or even a single stream, in the uplink.

However, when the gNB has a higher number of TXRUs than the UE, the gNB capability can be utilized for Multi-User Multiple Input Multiple Output (MU-MIMO). In MU-MIMO, multiple data streams are transmitted over the same PHY resources to multiple users simultaneously. At a high level, the principle of MU-MIMO is identical to SU-MIMO, namely the transmission of multiple data streams on the same PHY resources over uncorrelated channels between multiple transmit and multiple receive antennas. Since the distance between receiver antennas in different devices is typically much larger than between the antennas on a single device, the channels in MU-MIMO are usually even less correlated than in a typical SU-MIMO transmission.

In SU‐MIMO, multiple streams are transmitted between a single UE and the gNB for increasing single user throughput. However, since the practical maximum number of streams that can be achieved in SU‐MIMO is limited to the number of antennas in the UE, the benefits of increasing the array size at the gNB tend to be limited unless MU‐MIMO is leveraged. As a result, the primary system capacity gains from massive MIMO are achieved by leveraging MU‐MIMO.

The coverage of the system can be improved by utilizing mMIMO with high-gain adaptive beamforming, which focuses the transmitted energy toward the intended receiver. Coverage enhancement is particularly important at higher carrier frequencies, where deployments tend to be coverage limited due to poor path loss conditions. Beamforming can also reduce the interference seen in the system, since the signals received at UEs other than the intended UE typically combine non-coherently, which increases the overall signal-to-interference-plus-noise ratio experienced by the UEs in the system.

Severely coverage-limited situations at higher carrier frequencies pose two main difficulties that can be overcome with mMIMO. The first problem is that a cell-wide broadcast control channel (BCCH) may not be feasible, since the maximum pathloss may be too high to achieve a reasonable cell radius, especially at the high frequencies of the mmWave bands. Therefore, to increase the cell radius, a grid-of-beams-based approach involving the sweeping of multiple narrow high-gain beams is supported for downlink synchronization and BCCHs, as discussed in Sections 3.3.1 and 3.3.2. The second problem is that it is difficult to acquire channel knowledge on a per-antenna-element basis, because the individual antenna elements are low-gain with wide beamwidth in severely path-loss-limited channels.

To overcome these problems, a grid-of-beams type of approach is an appropriate solution for the data channels as well, and the system can be configured to acquire channel knowledge on a per-scanned-beam basis rather than on a per-antenna basis. However, it is difficult to define precisely the conditions under which acquiring channel knowledge per antenna element is impractical, and these limitations may and will change across product generations as more advanced processing technologies become available.

Thus, one of the main design goals for NR mMIMO was to provide a framework that scales easily to any number of antenna elements and any of the gNB antenna array architectures depicted in Figures 3.12–3.14. Furthermore, NR mMIMO provides solutions where the UE can be agnostic to the gNB array configuration.


Figure 3.12 Digital baseband beamforming architecture, with K input streams and Q Transmitter‐Receiver units and antennas.


Figure 3.13 RF beamforming architecture, with B input streams, B Transmitter-Receiver units and Q antennas.


Figure 3.14 Hybrid beamforming architecture, with B input streams, B Transmitter-Receiver units and Q antennas.

Additionally, even though the number of antennas at a UE is expected to be significantly lower than at the gNB, similar antenna architecture considerations also apply to the UE design. Therefore, the NR mMIMO framework also supports a grid-of-beams strategy at the UE, which is mostly applicable for operation above 6 GHz. The number of RX and TX beams is left to UE implementation, but DL synchronization, the uplink RACH, as well as data and control channel transmissions, are compatible with UEs that have a hybrid beamforming architecture.

The mMIMO concept can be implemented with three different antenna array architectures: digital baseband beamforming, analog (RF) beamforming, and hybrid beamforming. Each of these architectures has different characteristics and implications for system operation.

The digital baseband beamforming architecture shown in Figure 3.12 is the architecture typically deployed in LTE macro-cells. In a digital baseband architecture, each antenna port is driven by its own transceiver, i.e. there is a full transceiver with Analog-to-Digital and Digital-to-Analog converters (ADC and DAC) behind every antenna, and the multi-antenna methods operate at baseband in the digital domain. Extensions to multi-stream transmission and reception involve incorporating multiple receive and transmit weights in the baseband MIMO processing block. Digital baseband architectures are assumed for systems operating in frequency bands below 6 GHz, where deployments are expected to be mostly interference limited. The benefits of large-scale arrays are realized by using high-order spatial multiplexing, with an increasing emphasis on MU-MIMO as the array size increases. Baseband architectures provide a high degree of flexibility, such as frequency-selective beamforming across Orthogonal Frequency-Division Multiple Access (OFDMA) subcarriers.
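As a toy illustration of the beamforming gain such arrays provide, the sketch below assumes a uniform linear array with half-wavelength spacing, a narrowband signal, and line-of-sight steering; it is not an NR-specified procedure, just the textbook matched-weight calculation:

```python
import cmath
import math

# Array response of a hypothetical Q-element uniform linear array with
# half-wavelength spacing for a plane wave arriving from angle theta.
def array_response(q_antennas, theta_rad):
    return [cmath.exp(1j * math.pi * q * math.sin(theta_rad))
            for q in range(q_antennas)]

Q = 64
theta = math.radians(20)
a = array_response(Q, theta)
# Unit-norm matched (conjugate) weights steer the beam toward theta.
w = [x.conjugate() / math.sqrt(Q) for x in a]
# Coherent combining yields a power gain of Q over a single antenna.
gain = abs(sum(wi * ai for wi, ai in zip(w, a))) ** 2
print(f"beamforming power gain: {gain:.1f} (= Q = {Q})")  # -> 64.0
```

With Q = 64 elements this corresponds to about 18 dB of array gain, which is the mechanism behind the coverage improvement discussed above.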

However, the complexity of baseband architectures increases significantly as the number of transceivers increases, as well as when the system bandwidth increases. The use of wider bandwidths can improve system capacity but necessitates very high-speed ADCs and DACs, which have significant power consumption and increase costs.

An alternative approach to the digital baseband beamforming architecture is the RF beamforming architecture, also called the analog architecture, where the control of MIMO and beamforming is performed at the RF level with analog components. Figure 3.13 shows the RF MIMO architecture, where a single transceiver drives the antenna array and the transmit array processing is performed with RF components having phase-shifting, and potentially also gain adjustment, capabilities.

In contrast to baseband architectures, frequency-selective beamforming is not feasible with an RF architecture, as the transmit weights are applied at RF across the entire signal bandwidth. For the same reason, multiplexing different UEs across frequency would require the use of multiple beams operating simultaneously, each driven by a separate transceiver unit. As a result, with RF architectures operating with a single transceiver, the multiplexing of UEs is typically performed in the time domain instead of the frequency domain. Furthermore, as the transmit weights are applied using analog components, sufficient beam switching time should be provided between symbols using different transmission weights. Therefore, RF architectures are used mainly at millimeter-wave frequencies, which are mainly pathloss rather than interference limited and where the available bandwidths are relatively wide.

The hybrid beamforming architecture is an alternative to the fully digital baseband and RF beamforming architectures, seeking a compromise between complexity and transmission flexibility. In hybrid beamforming, the control of MIMO and beamforming is split between RF and baseband. Figure 3.14 shows an example of the hybrid architecture, where multiple streams are beamformed at RF in addition to the baseband MIMO processing. In the hybrid architecture, each RF beam is driven by a transceiver, and multi-stream beam weighting is applied at baseband to the inputs of the transceivers. Figure 3.14 shows a hybrid architecture for a “fully connected” array configuration, where the multiple RF beamforming weight vectors are applied in parallel to all antenna elements of the array. The alternative hybrid architecture is the “sub-array” configuration, where each RF weight vector is applied to a unique subset of the antenna elements. One advantage of the sub-array configuration is that it avoids the summation devices behind the antenna elements that are needed to form multiple parallel beams in the fully connected configuration. However, the beams in the sub-array configuration have reduced beamforming gain, as each TXRU is not connected to all antennas; this reduction in RF beam gain can be mitigated by the baseband MIMO operation. A hybrid architecture provides additional flexibility over an RF beamforming architecture, as the baseband transmit portion can be adapted across the signal bandwidth to further optimize performance.

For NR mMIMO operation, several objectives and requirements were defined to meet the overall 5G system requirements:

  1. Support for different beamforming architectures (digital, hybrid, and RF beamforming, as discussed above) at both the gNB and the UE.
  2. Support for sector-wide common channel transmission, as in LTE, as well as beam sweeping of the common control channels (CCCHs) to improve downlink coverage. The schemes actually introduced are discussed in Sections 3.3.1 and 3.3.2.
  3. Scalability in terms of the number of antenna ports, transceiver units, and antenna elements, especially at the gNB.
  4. Support for UE operation with minimal assumptions on the MIMO operation at the network side, allowing network vendors to improve their implementations without requiring a new generation of UEs.
  5. Support for user-specific reference symbol designs, eliminating the use of common reference signals to enable network power savings when the number of UEs in the cell is low or the cell is completely empty.

With these requirements, NR mMIMO transmissions can be divided into the following operational options.

The first option is to utilize precoding on the CSI-RS. This technique uses beamformed pilot signals, where the UE sends feedback for one or more beamformed reference signals. Two main classes of pre-coded CSI-RS techniques are possible.

The first is the use of cell-specific beams with dynamic beam selection. As an example, the grid-of-beams concept involves the base station transmitting reference signals on some number of fixed beams, and the UE can perform best-beam selection and/or feed back CSI for one or more beams. The feedback can include a Channel Quality Indicator (CQI), Rank Indicator (RI), and possibly a codebook Precoding Matrix Indicator (PMI), corresponding to the beamformed channel measured by the UE.

The second is the use of UE-specific beams created by leveraging reciprocity. An example of this approach is where UL signals from the UE are leveraged to determine one or more beams over which the CSI-RS is transmitted to the UE. The UE then feeds back CSI for the one or more UE-specific beams over which the CSI-RS was transmitted.

The use of precoding on the CSI-RS, e.g. the grid-of-beams concept, is especially helpful in mmWave deployments, where path loss limitations make it difficult to estimate the channel to each antenna port. Pre-coded CSI-RS techniques are appropriate for both baseband and hybrid array architectures and can support both SU-MIMO and MU-MIMO transmission. With baseband architectures, the precoding of the CSI-RS for forming the cell-specific or UE-specific beams is applied at the baseband level. In contrast, with hybrid architectures, the CSI-RS is pre-coded in the RF/analog domain, and the number of CSI-RS ports is limited by the number of transceivers in the array.

When precoding is not used on the CSI-RS, the mMIMO technique involves the transmission of the CSI-RS from all the transceiver units in the array and generally involves feedback from the UE, obtained through PMI codebook signaling. NR defines two types of codebooks: a “standard resolution” codebook intended for SU-MIMO operation and a “high resolution” codebook intended to provide channel knowledge accurate enough for MU-MIMO transmission. Non-pre-coded CSI-RS techniques are generally intended for digital baseband architectures operating in scenarios that allow the acquisition of channel knowledge on a per-transceiver basis via the CSI-RS. This methodology is possible in frequency bands below 6 GHz in deployments that are not path loss limited. Hence, techniques without pre-coded CSI-RS are not well suited to the mmWave bands, where poor path-loss conditions make it difficult for a non-pre-coded reference signal to reach the cell edge.

In addition, the NR mMIMO framework leverages the reciprocity of the propagation channel. Transmit weights for the downlink can be computed based on signals received on the uplink (and vice versa) by leveraging the uplink/downlink reciprocity of the RF multipath channel response between a base station and a UE. As is well known, the instantaneous overall space-time-frequency RF multipath channel is reciprocal in a TDD system, but some aspects of the channel are reciprocal even in FDD systems, e.g. the multipath directions of arrival/departure and the times of arrival. Various aspects of the uplink channel can be used for computing the downlink transmit weights, e.g. the uplink spatial covariance matrix, directions of arrival, the best uplink beam, or even the complete uplink channel response (in TDD). However, leveraging reciprocity has its challenges. For example, link adaptation can be challenging given the non-reciprocal nature of the interference. Also, if UEs do not transmit with all antennas, the full uplink matrix channel cannot be determined without antenna switching in the UE. Transmit power limitations in the UE may hinder the ability of the uplink signal to reach the base station with sufficient SNR unless antenna/beamforming gain is applied at one or both ends of the link, making it difficult to acquire the full matrix uplink channel on a per-antenna basis. Leveraging reciprocity also requires antenna array calibration to remove the influence of transceiver hardware variations between the uplink and downlink.
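As a toy numeric illustration of TDD reciprocity-based precoding (hypothetical channel values; maximum-ratio transmission is used here as one simple way to turn an uplink channel estimate into downlink weights, not as the NR-mandated method):

```python
# Toy reciprocity example: in TDD, the uplink channel estimate h (one
# complex coefficient per BS antenna) can be reused to form maximum-ratio
# downlink transmit weights w = conj(h) / ||h||. Values are illustrative.
h = [0.8 + 0.3j, -0.2 + 0.9j, 0.5 - 0.6j, -0.7 - 0.1j]  # uplink channel estimate
norm = sum(abs(x)**2 for x in h) ** 0.5
w = [x.conjugate() / norm for x in h]            # unit-power MRT weights

# The beamformed downlink channel sum(w_i * h_i) is real-valued and equals
# ||h||, i.e. the per-antenna contributions add up coherently.
rx = sum(wi * hi for wi, hi in zip(w, h))
print(abs(rx - norm) < 1e-12)  # -> True
```

The same conjugate-weight idea underlies the covariance- and beam-based variants mentioned above; they differ only in which uplink quantity is used to derive the weights.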

3.2.6 Channel Coding

Channel coding is an essential component of any telecommunication system operating in an imperfect channel environment. Channel encoding is performed at the transmitter to enable the receiver to detect and recover bit errors caused by the imperfect channel. The selection of a channel coding method is a compromise between the obtained coding gain, the computational complexity, and the processing delay. All service scenarios in NR need channel coding as a necessary functionality, but the exact coding scheme may be optimized depending on the use case. However, significant changes, or adopting completely different coding schemes per use case, could require complex hardware implementations and consequently lead to expensive NR operation. In general, the selection of channel code(s) flexible enough to match and satisfy the different NR requirements was considered from the beginning of the NR discussions in 3GPP. The selection of the channel coding scheme was mainly influenced by the requirements of the eMBB use case, given that the hardware should be dimensioned mainly for the case that supports the very high data rate requirements. The other usage scenarios should preferably use the same channel coding scheme, and different schemes should only be introduced if compelling benefits are identified.

The general requirements of the eMBB scenario in NR are broad, but most of them can be condensed into a smaller set: performance of the coding scheme, implementation complexity, encoding and decoding latency, and flexibility (e.g. variable code length, code rate, and HARQ support). Turbo, LDPC, and polar codes were identified in 3GPP as the most promising candidates for the eMBB data channel because they are capacity-approaching codes.

Turbo coding is the existing coding scheme in LTE and capable of handling existing broadband scenarios. The LTE turbo transmitter and receiver chain, including interleaving, rate matching, and HARQ, has also matured over time. In LTE, the turbo code supports a wide range of block sizes, and its performance at lower code rates is competitive with many other coding candidates. However, turbo coding has difficulty achieving low decoding latencies due to its interleaving/deinterleaving stages and iterative decoding. Also, considering the already high energy consumption and chip area of existing LTE turbo decoders, a substantial increase in both is expected when data rates grow to the multi-gigabit range. Therefore, implementation aspects, namely area efficiency, i.e. encoded/decoded throughput per chip area (Gbps mm−2), and energy efficiency, i.e. energy per encoded/decoded bit (pJ/bit), played a significant role when deciding the coding candidate for NR. For example, supporting 20 Gbps throughput with a 1 W baseband power budget at the UE would require an energy efficiency of around 50 pJ/bit, which is not possible with available turbo decoder implementations.
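The 50 pJ/bit figure follows directly from dividing the power budget by the target throughput; a quick check:

```python
# Energy per decoded bit = power budget / throughput.
power_w = 1.0            # assumed UE baseband power budget (W)
throughput_bps = 20e9    # 20 Gbps peak data rate

energy_per_bit_pj = power_w / throughput_bps * 1e12  # J -> pJ
print(f"{energy_per_bit_pj:.0f} pJ/bit")  # 50 pJ/bit
```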

Polar coding is very promising in terms of theoretical performance. It was introduced with a simple decoding scheme, successive-cancellation (SC) decoding, which achieves capacity when the block sizes are very large. However, such block sizes will not be used in NR, and the performance is lower for short to moderate block sizes. New algorithms have been proposed that improve the performance of polar codes at short to moderate block sizes at the cost of the original low decoding complexity. For example, list-32 Cyclic Redundancy Check (CRC) assisted decoding performs better than available LDPC designs when the block sizes are below 2000 bits. Incremental redundancy (IR) HARQ support and implementation concerns at high throughputs were identified as possible issues for polar codes.

LDPC codes are the most common scheme used outside 3GPP and provide very good performance over a wide range of block sizes. They are also capable of approaching the Shannon limit, particularly for long block codes, and they have superior performance at code rates close to one. Implementation flexibility is another key benefit of LDPC: for example, decoding latency is low thanks to the parallelizable decoder architecture. Like turbo codes, LDPC codes are mature, as they are already used in many other standards.

As implementation aspects were very important for the NR eMBB data channel coding scheme, it is good to understand the capabilities of the different codes. Tables 3.2 and 3.3 summarize recent decoder implementations for turbo, LDPC, and polar codes.

Table 3.2 Implementation for single code rate and block size.

Coding scheme LDPC Turbo Polar
Reference [20] [21] [22] [23] [24] [25] [26] [27] [28]
Technology (nm) 65 65 65 65 45 65 90 65 40
Decoding algorithm Split threshold min‐sum Offset min‐sum Split threshold min‐sum Partial parallel Sum‐Product Max‐log‐MAP Max‐log‐MAP SC BP Fast SSC
Code length 2048 2048 2048 672 6144 6144 1024 1024 1024
Code rate 0.84 0.84 0.84 0.8125 0.75 0.5 0.5 0.5
Clock (MHz) 195 700 100 185 40 500 1000 410 2.79 300 50 248
Chip area (mm2) 4.84 5.35 5.10 0.16 11.1 109 3.21 1.48
Throughput (Gbps) 92.8 47.7 6.7 85.7 18.4 5.6 3.7 15.8 2.9 4.7 0.77 254.1
Area‐efficiency (Gbps mm−2) 19.1 8.9 1.2 16.8 3.6 35 0.34 0.145 0.89 3.17 0.5
Energy‐efficiency (pJ/bit) 15 58.7 21.5 13.6 3.9 17.65 2105 608 11.45 102.1 23.8
Maximum latency (ns) 56.4 137 960 81 375 358 1470

Table 3.3 Implementations for multiple code rates and block sizes.

Coding scheme LDPC Turbo
Reference [29] [30] [31] [32] [33] [34]
Technology (nm) 90 28 65 65 65 45
Decoding algorithm Stochastic Min‐sum New Partial layered BP Max‐log‐map Max‐log‐map
Code lengths (standard) 672 (802.15.3c) 672 (802.11ad) 672 (802.11ad) 2304 All block sizes in LTE All block sizes in LTE
Code rates 1/2, 5/8, 3/4, 7/8 1/2, 5/8, 3/4, 13/16 1/2, 5/8, 3/4, 13/16 1/2–1 All code rates All code rates
Clock(MHz) 768 260 400 1100 410 600
Chip area (mm2) 2.67 0.63 0.575 1.96 2.46 2.004
Throughput (Gbps) 7.9 12 9.25 1.28 1.01 1.67
Area‐efficiency (Gbps mm−2) 2.97 19 16.08 0.65 0.41 0.83
Energy‐efficiency (pJ/bit) 55.2 30 29.4 709 1870 520

In the 3GPP discussions, it was understood that LDPC codes with limited flexibility provide the most attractive area and energy efficiency, and that these advantages remain even when full flexibility is supported. For decoding hardware that can achieve acceptable latency, performance, and flexibility, there are concerns about the area and energy efficiency achievable with polar codes. Turbo codes are widely implemented in commercial hardware, supporting HARQ and flexibility comparable to the NR requirements, but not at the high data rates or low latencies required for NR.

In addition, highly parallelized LDPC decoders help to reduce latency, and there are concerns that turbo and polar decoders incur longer latency than LDPC decoders. It is understood that LDPC, polar, and turbo codes can all deliver acceptable flexibility. Support for chase combining (CC) and incremental redundancy hybrid automatic repeat request (IR-HARQ) was a concern raised when discussing the polar code for the eMBB data channel, whereas LDPC schemes supporting both CC- and IR-HARQ, as well as the ability of turbo codes to support both, were well known.

Considering most of the above aspects, 3GPP decided to adopt LDPC codes for the NR eMBB data channel (see [35]).

Using the same code for the other NR use cases is hardware efficient if a different coding scheme brings no real benefits. URLLC is the next scenario, with requirements different from eMBB. For URLLC coding, the most important requirements are low latency and very high reliability in encoding and decoding. This demands a channel coding scheme with low encoding/decoding latency and extremely low error floors. Low encoding/decoding latency can often be achieved by adopting small to moderate block lengths; in consequence, the system operates far from the Shannon limit stated for very long codes. These requirements were considered in the LDPC design details, and URLLC uses the same coding scheme as eMBB.

Massive MTC requirements are quite different from the eMBB usage scenario. The key requirements for the mMTC use case are low-complexity and low-cost solutions that can operate for years while serving small throughput requirements. In many mMTC scenarios, the device may operate only on battery power and be required to communicate over a long period. Moreover, device cost must be low to allow deployment in massive numbers. Most capacity-approaching coding schemes, e.g. turbo, LDPC, and polar codes, perform well when the block length is large. When block sizes are small, their performance is not significantly better than that of simple coding schemes like convolutional codes. Considering the decoder complexities associated with turbo, LDPC, and polar codes, it is likely that other codes must be considered for mMTC in future 3GPP releases.

3.2.6.1 Channel Coding for User Plane Data

The LDPC code adopted in NR is flexible and different from the LDPC codes standardized before. Good flexibility in the supported block sizes and code rates is needed to handle the wide range of traffic requirements in eMBB. IR-HARQ support is not available in earlier LDPC standards, and 3GPP has taken major steps forward in optimizing LDPC codes for the eMBB data channel. In summary, the NR LDPC design supports 1-bit granularity of block sizes, IR-HARQ, and code rates from high to low by utilizing two base graphs. In the following, we provide a quick overview of the basic code construction and the coding chain, starting with basic LDPC operation.

An LDPC code is often defined by its M × N parity-check matrix H. The M rows in H specify the M parity-check constraints of the code, and different codes have different parity-check matrices. An example parity-check matrix is illustrated in Eq. (3.1).

The N columns in H correspond to the total number of code bits within a codeword. There are two types of LDPC codes, called regular and irregular LDPC codes. A regular code has exactly wc ones per column (the column weight) and exactly wr = wc × (N/M) ones per row (the row weight), where wc and wr are both small compared to N. Each parity-check equation involves exactly wr bits, and every bit of a codeword participates in exactly wc parity-check equations. In irregular LDPC codes, the number of ones per column or row is not constant. Such irregular LDPC codes can outperform regular LDPC codes of similar dimensions.

The codeword x is constructed such that Hx = 0 (mod 2). At the encoder side, a generator matrix is required to encode the information bits. The codeword x can be written in terms of an information part s and a parity-check part p.

(3.2) x = [s p]

The parity check matrix can be divided into two parts as

(3.3) H = [A B]

Matrix multiplication gives the following

(3.4) Hx = As + Bp = 0 (mod 2)

When the matrix B is non-singular, the parity bits p can be derived as

(3.5) p = B−1As (mod 2)

In Eq. (3.5) the generator matrix G can be identified from B−1A. At the decoder, LDPC codes use message-passing algorithms, which are best understood through the Tanner graph representation. Any LDPC code can be illustrated by a Tanner graph, as shown in Figure 3.15.

Image described by caption and surrounding text.

Figure 3.15 Tanner graph for parity check matrix in Eq. (3.1).

For LDPC codes, the Tanner graph represents the parity-check matrix with two types of nodes, known as check nodes and variable nodes. In Figure 3.15, check nodes are illustrated with squares C1–C3 and variable (bit) nodes with circles V1–V6. There are M check nodes (three in the example) and N variable nodes (six in the example), corresponding to the number of rows and columns of matrix H. The check nodes are connected to the variable nodes according to the ones in matrix H. The edges between nodes are used by the message-passing algorithms such that probabilistic quantities can be computed iteratively. In the LDPC decoding process, likelihoods obtained from the soft-decision components of a received vector r initialize the variable nodes, and the relevant probabilistic values are then computed iteratively such that the bit decisions improve with the number of iterations.
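The encoding step p = B−1As can be sketched with a small illustrative parity-check matrix. The H = [A B] below is a hypothetical toy example, not an NR matrix; B is chosen lower-triangular so the parity bits follow by forward substitution over GF(2):

```python
# Illustrative (3 x 6) parity-check matrix H = [A | B] over GF(2).
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
B = [[1, 0, 0],   # lower-triangular, hence non-singular mod 2
     [1, 1, 0],
     [0, 1, 1]]

def encode(s):
    """Return codeword x = [s | p] satisfying Hx = 0 (mod 2)."""
    # t = A s (mod 2)
    t = [sum(a * b for a, b in zip(row, s)) % 2 for row in A]
    # Solve B p = t by forward substitution (mod 2).
    p = []
    for i, row in enumerate(B):
        acc = sum(row[j] * p[j] for j in range(i)) % 2
        p.append((t[i] - acc) % 2)
    return s + p

def check(x):
    """Verify every parity-check equation of H = [A | B]."""
    H = [a + b for a, b in zip(A, B)]
    return all(sum(h * v for h, v in zip(row, x)) % 2 == 0 for row in H)

x = encode([1, 0, 1])
print(x, check(x))  # [1, 0, 1, 1, 0, 0] True
```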

The NR LDPC code design is based on quasi‐cyclic low‐density parity check (QC‐LDPC), which has low complexity encoding/decoding compared to other variants. The parity‐check matrix of a QC‐LDPC is given as an array of sparse circulants of the same size. The circulant size, or the shift size, determines the overall complexity of the implementation together with the dimensions of the parity‐check matrix. In NR LDPC design, two base graphs are introduced such that the code provides good performance at a broader range of block sizes and code rates and improves the latency and performance for lower block sizes and code rates. Parameters of LDPC designs are summarized in Table 3.4.

Table 3.4 NR LDPC base graphs.

Base graph Maximum block size Max code rate Min code rate
LDPC lifting size
Min (Zmin) Max (Zmax)
BG 1 8448 8/9 1/3 2 384
BG 2 3840 2/3 1/5 2 384

The parity-check matrix H of a QC-LDPC code can be represented as follows

(3.6) H = [Pi, j], i = 1, …, m; j = 1, …, n

where Pi, j is either an all-zero block or a cyclic-permutation matrix, i.e. the z × z identity matrix cyclically shifted to the right. Pi, j is often represented by a numerical entry giving the value of the shift. The non-zero entries of H define the connections between check and variable nodes. This representation is generally known as the base graph. The two base graphs in NR have the structure shown in Figure 3.16.

A box divided into 5 regions labeled A, B, C, D, and E with double-headed arrows (Kb and N - Kb) depicting dimensions of LDPC base graphs.

Figure 3.16 Dimensions of LDPC base graphs.

For BG #1, N = 68 and Kb = 22, while for BG #2, N = 52 and Kb depends on the supported block size. Matrix A corresponds to systematic bits; matrix B is square, corresponds to parity bits, and has a dual-diagonal structure (i.e. main diagonal and off-diagonal); matrix C is a zero matrix; matrix D corresponds to systematic and parity bits; and matrix E is an identity matrix.

Each of the base graphs has eight sets of lifting sizes, as listed in Table 3.5, and the values of Pi, j can differ for the same i and j between sets. Additionally, the maximum shift size of each set can be adjusted such that different code block sizes are supported with NR LDPC codes.

Table 3.5 Sets of LDPC lifting size.

Set number Set of lifting sizes (Z)
1 {2, 4, 8, 16, 32, 64, 128, 256}
2 {3, 6, 12, 24, 48, 96, 192, 384}
3 {5, 10, 20, 40, 80, 160, 320}
4 {7, 14, 28, 56, 112, 224}
5 {9, 18, 36, 72, 144, 288}
6 {11, 22, 44, 88, 176, 352}
7 {13, 26, 52, 104, 208}
8 {15, 30, 60, 120, 240}
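The circulant expansion of a base graph into the full parity-check matrix can be sketched as follows; the 2 × 3 base matrix and the lifting size Z = 4 (from set 1 in Table 3.5) are hypothetical, chosen only to keep the output small:

```python
# Expand a toy QC-LDPC base matrix into its full parity-check matrix.
# Entry -1 denotes the all-zero Z x Z block; entry s >= 0 denotes the
# identity matrix cyclically shifted s positions to the right.
def shifted_identity(shift, z):
    return [[1 if c == (r + shift) % z else 0 for c in range(z)]
            for r in range(z)]

def expand(base, z):
    h = []
    for brow in base:
        blocks = [shifted_identity(s, z) if s >= 0 else
                  [[0] * z for _ in range(z)] for s in brow]
        for r in range(z):
            h.append([blocks[b][r][c]
                      for b in range(len(brow)) for c in range(z)])
    return h

# Hypothetical 2 x 3 base graph with lifting size Z = 4.
base = [[0, 1, -1],
        [2, -1, 0]]
H = expand(base, 4)
print(len(H), len(H[0]))  # 8 12
```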

BG #1 and BG #2 have different operating regions, unlike the LTE turbo code, which was used across all transport block sizes (TBSs) and code rates (R). BG #2 is used when TBS ≤ 292 for all code rates, when TBS ≤ 3824 and R ≤ 2/3, or when R ≤ 1/4; otherwise, BG #1 is used.
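The base-graph selection rule stated above can be written directly as a small helper (the function name is illustrative):

```python
def select_base_graph(tbs, r):
    """Base-graph choice following the rule stated in the text."""
    if tbs <= 292 or (tbs <= 3824 and r <= 2 / 3) or r <= 1 / 4:
        return 2
    return 1

print(select_base_graph(100, 0.9))   # 2 (small block, any rate)
print(select_base_graph(8000, 0.2))  # 2 (very low rate)
print(select_base_graph(8000, 0.8))  # 1
```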

The NR LDPC coding chain follows steps similar to LTE, as shown in Figure 3.17; however, there are differences at each step.

Diagram of coding chain for LDPC from CRC attachment to LDPC encoding, to rate matching, to bit-interleaving.

Figure 3.17 Coding chain for LDPC.

The procedure starts with CRC attachment, which can have one or two levels. First, a CRC is attached to the transport block. Then a CRC is appended to each code block, if code block segmentation produces more than one code block. In NR, a 16-bit CRC is used when the TBS is lower than 3824, whereas a 24-bit CRC is used for all larger TBS. Moreover, a 24-bit CRC is appended per code block when there is more than one code block.
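The two-level CRC attachment rule can be sketched as follows (function names are illustrative, and the thresholds are the ones quoted in the text):

```python
def transport_block_crc_bits(tbs):
    """Transport-block-level CRC length per the rule in the text."""
    return 16 if tbs < 3824 else 24

def code_block_crc_bits(num_code_blocks):
    """A 24-bit CRC is added per code block only after segmentation."""
    return 24 if num_code_blocks > 1 else 0

print(transport_block_crc_bits(1000), code_block_crc_bits(3))  # 16 24
```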

In the second step, LDPC encoding is performed using the parity-check matrices described in the earlier section. This involves several sub-steps: selecting the lifting size, zero padding, encoding, and removal of the padding bits. The selection of the lifting size also depends on the base graph: for BG #1, Kb = 22, while Kb for BG #2 is decided per supported block size.

After channel coding, rate matching in NR is based on a circular buffer, as in LTE. After encoding, the coded bits are copied to the circular buffer without the first 2Z bits. Redundancy versions (RVs) are defined as RV0, RV1, RV2, and RV3, and they are not uniformly spaced as in LTE. For BG #1, the starting positions of the RVs are at fractions 0, 17/66, 33/66, and 56/66 of the circular buffer; for BG #2, they are at 0, 13/50, 25/50, and 43/50.
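Assuming a full circular buffer (no buffer limitation), the quoted fractions translate into bit positions that are multiples of the lifting size Z, since the buffer holds 66·Z bits for BG #1 and 50·Z bits for BG #2; a minimal sketch:

```python
# RV starting positions as the fractions quoted in the text,
# stored as (numerator, buffer length in multiples of Z).
RV_NUMERATORS = {1: (0, 17, 33, 56), 2: (0, 13, 25, 43)}

def rv_start(bg, rv, z):
    """Bit index where redundancy version rv starts, full-buffer case."""
    return RV_NUMERATORS[bg][rv] * z

z = 384  # maximum lifting size
print([rv_start(1, rv, z) for rv in range(4)])  # [0, 6528, 12672, 21504]
```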

Finally, bit interleaving is performed, which is quite similar to the LTE bit interleaver. In principle, both are block interleavers, where writing is row-wise from left to right and reading is column-wise from top to bottom. The only difference is that the number of rows in the NR block interleaver is defined by the modulation order used for the transmission.
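A minimal sketch of such a block interleaver, with the number of rows set to the modulation order (the surrounding NR rate-matching details are omitted here):

```python
def bit_interleave(bits, qm):
    """Write row-wise into qm rows, then read out column-wise."""
    assert len(bits) % qm == 0
    cols = len(bits) // qm
    rows = [bits[r * cols:(r + 1) * cols] for r in range(qm)]
    return [rows[r][c] for c in range(cols) for r in range(qm)]

# 8 coded bits with 16QAM (qm = 4): each read-out column groups the
# bits that end up in one modulation symbol.
print(bit_interleave([0, 1, 2, 3, 4, 5, 6, 7], 4))  # [0, 2, 4, 6, 1, 3, 5, 7]
```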

After completing the coding chain, code block concatenation is done and the output bit stream is sent to the modulation mapper.

3.2.6.2 Channel Coding for Physical Control Channels

Polar coding was adopted as the coding scheme for both DL and UL control channels, except for very small block lengths, considering the performance benefits observed with list decoding of polar codes (see [35]). For very short block sizes, LTE block codes were adopted in 3GPP: payloads of 1 bit, 2 bits, and 3–11 bits are supported with repetition, simplex, and LTE RM codes, respectively. Polar coding is new to the standards bodies, as it was invented in 2009.

Polar codes are a channel coding scheme that approaches the communication channel capacity, and with the help of list decoders, polar codes have comparable and sometimes even better performance than state-of-the-art codes like LDPC and turbo codes. Also, the decoding complexity of polar codes is low, on the order of O(L · N · log2(N)), where N is the encoded block length and L is the list size. These features made polar codes attractive for control channels, and polar coding was adopted as the main coding scheme for both downlink and uplink control.

Polar codes use the concept of polarization for error correction. The basic building block in polar codes is shown in Figure 3.18.

Diagram of basic building block of polar codes, with rightward arrows (input bits) labeled u1 and u2 pointing to 2 boxes labeled W. Rightward arrows, y1 and y2, from the boxes depict output/encoded bits of the encoder.

Figure 3.18 Basic building block of polar codes.

In Figure 3.18, u1 and u2 refer to the input bits, and y1 and y2 refer to the output (encoded) bits of the encoder. In the information-theoretic view, the mutual information I(U1; Y1, Y2) decreases compared to the pre-polarized value I(U1; Y1), while I(U2; Y1, Y2, U1) increases compared to I(U2; Y2). In this way, one channel is degraded and the other is upgraded.

By duplicating and stacking the basic blocks, longer polar codes can be constructed. Figure 3.19 shows an example of a length-4 polar code.

Encoding graph of length-4 polar codes, with boxes for W, W2, W4, and R4 and rightward arrows for u1, u2, u3, and u4, v1, v2, v3, v4, x1, x2, x3, x4, y1, y2, y3, and y4.

Figure 3.19 Encoding graph of length‐4 polar codes.

As the number of blocks grows, the polarization effect becomes visible, and when the block size is very large, some channels have zero capacity while others become error-free. This phenomenon is exploited in data transmission: the error-free channels are used to transmit information bits, while the bits transmitted on the zero-capacity channels are forced to a known value, e.g. 0; these are called frozen bits.

In data transmission, every channel is given a polarization weight or ranking, and the best channels out of the N polarized channels are used to transmit the data bits. As is visible from the stacking, the number of output bits of a polar codeword is always a power of two; however, rate matching schemes can still be applied without significant performance loss. At the receiver side, polar coding can use many decoding algorithms, just like other error control coding schemes. SC list decoding offers good performance at reasonable algorithmic complexity, and in NR, most of the evaluations and design decisions were based on the assumption of SC list decoding at the receiver.
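The recursive construction described above can be sketched as follows; the set of data positions used for N = 8 is only an illustrative reliability ranking, not the NR polar sequence:

```python
def polar_transform(u):
    """Apply the polar transform recursively over GF(2)."""
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    # Basic block: upper branch carries u1 xor u2, lower branch carries u2.
    upper = polar_transform([a ^ b for a, b in zip(u[:half], u[half:])])
    lower = polar_transform(u[half:])
    return upper + lower

def polar_encode(info_bits, data_positions, n):
    """Place info bits on the reliable positions; freeze the rest to 0."""
    u = [0] * n
    for pos, bit in zip(sorted(data_positions), info_bits):
        u[pos] = bit
    return polar_transform(u)

# Hypothetical length-8 code: positions {3, 5, 6, 7} carry data,
# the remaining positions are frozen bits.
codeword = polar_encode([1, 0, 1, 1], {3, 5, 6, 7}, 8)
print(codeword)
```

A useful sanity check is that the transform is its own inverse over GF(2), so applying it twice recovers the input vector.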

NR control channels use concatenated polar coding schemes, with CRC bits and, for Uplink Control Information (UCI), additional parity-check bits, which is understood to give much better performance than a traditional polar coding design. Two different designs were introduced for NR. The first is for Downlink Control Information (DCI) transmitted on PDCCH, which uses a distributed-CRC polar code with a maximum polar codeword of 512 bits.

For uplink control, the maximum polar codeword is 1024 bits. The uplink design uses a parity-check and CRC concatenated polar code for UCI payloads of 12–19 bits, while a CRC-concatenated polar code is used for UCI messages above 19 bits.

As the DL control channel in NR involves blind decoding at the UE, an optimized coding scheme is required to save UE energy and reduce latency. This is achieved by distributing CRC bits among the information bits: overall, 8 CRC bits are distributed, and 16 bits are appended at the end. A nested interleaver is used to support any code block size, and a benefit of the distributed-CRC polar code is the early termination capability at the decoder, which saves UE energy and reduces decoding latency. In addition, it supports flexible decoding operation, where the CRC bits can be used for error correction or for error detection by a conventional CRC detector.

Finally, the design reduces the false alarm rate (FAR), i.e. the rate at which incorrectly received messages are decoded as correct. FAR targets as low as 2−21 are obtained by careful selection of the distribution pattern.

For the UL control channel, CRC bits are attached at the end of the information payload, and the CRC length depends on the payload size. When the payload is between 12 and 19 bits, 6 CRC bits are appended together with three parity-check bits. Above 19 bits of information payload, 11 CRC bits are appended.

At the input of the encoder, these concatenated bits are mapped to the most reliable positions (taken from the ranked positions), and the remaining positions are set to zero. This ranking order is known as the polar sequence, a nested pattern supporting polar codewords up to 1024 bits long.

Basic steps of the coding chain for control channels in NR are shown in Figure 3.20. As in the user plane coding chain, the coding chain for control channels starts with CRC attachment. As described before, the parity check bits are also applied for UCI when the payload is between 12 and 19 bits.

Diagram of coding chain of the NR polar coding from CRC distribution/attachment, parity bit placement to polar encoding, to rate matching, to bit-interleaving.

Figure 3.20 Coding chain of the NR polar coding.

The polar encoding is done based on the encoding mechanism discussed above. To ensure that the polar codeword size is selected efficiently, the selection depends on the information payload size, the available resources for the control channel, and a minimum threshold code rate such as 1/8. As highlighted before, the maximum codeword size is 512 for DCI encoding and 1024 for UCI.

After polar encoding, rate matching is performed. In NR PDCCH transmission, rate matching takes several steps. First, sub-block interleaving is performed on the polar-encoded bit stream. Second, the bits are copied to a circular buffer. Finally, bit selection is done depending on the code rate supported in the control channel. If the number of rate-matched output bits is larger than the polar codeword size, repetition is applied; otherwise, puncturing or shortening is used depending on the code rate. When the code rate is lower than or equal to 7/16, puncturing is used, whereas shortening is used for higher rates.

During puncturing, the transmitter removes a set of bits from the coded bits such that the non-transmitted bits are unknown to the receiver, and the corresponding log-likelihood ratios (LLRs) of those bits are set to zero.

During shortening, the transmitter sets certain input bits to a known value and does not transmit the coded bits corresponding to those input bits, such that the corresponding LLRs can be set to a large value at the receiver.
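The mode selection and the receiver-side LLR initialization described above can be sketched as follows (the value standing in for a "large" LLR is arbitrary):

```python
BIG_LLR = 100.0  # stands in for "known bit" certainty at the receiver

def rate_match_mode(code_rate):
    """Mode choice from the text: puncture at low rates, else shorten."""
    return "puncture" if code_rate <= 7 / 16 else "shorten"

def init_llrs(received_llrs, n, missing, mode):
    """Fill LLRs for the n-bit mother codeword when some bits were not sent.

    Punctured bits are unknown at the receiver (LLR = 0); shortened bits
    were set to a known value, so their LLR is made very large.
    """
    fill = 0.0 if mode == "puncture" else BIG_LLR
    llrs, it = [], iter(received_llrs)
    for i in range(n):
        llrs.append(fill if i in missing else next(it))
    return llrs

print(rate_match_mode(0.3), rate_match_mode(0.8))  # puncture shorten
print(init_llrs([1.2, -0.7], 4, {0, 3}, "puncture"))  # [0.0, 1.2, -0.7, 0.0]
```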

Finally, for UL control channel transmission a bit interleaver is applied; this step is not used in the DL direction. The interleaver is known as a triangular interleaver, where the write/read operation is similar to traditional block interleaving.

3.3 Downlink Physical Layer

In the design of the downlink PHY, the different beamforming architectures discussed in Section 3.2.5 were considered for each downlink channel. The different architectures at the gNB and UE side imposed several requirements on the design. In addition, the design was targeted to be independent of the used SCS and the frequency band of the cell.

3.3.1 Synchronization and Cell Detection

For initial cell search, downlink synchronization, and cell-level radio resource management (RRM) functions, a downlink synchronization signal block, called the SS Block, was defined. The SS Block comprises the primary synchronization signal (PSS), the secondary synchronization signal (SSS), and the PBCH, together with its demodulation reference signal (DMRS). The PSS and SSS are used for time and frequency synchronization acquisition and physical cell identity (PCI) determination. The PBCH is used to broadcast the most essential system information of the cell.

The PSS and SSS each occupy one OFDM symbol, and the PBCH occupies two OFDM symbols. PSS, SSS, and PBCH are multiplexed in a time-division manner, as shown in Figure 3.21.

SS Block structure depicted by 4 vertical adjacent bars, with the 1st bar starting the left labeled PSS and the 2nd and 4th bar labeled PBCH. The 3rd bar are divided into 3 labeled PBCH, SSS, and PBCH (top-bottom).

Figure 3.21 SS Block structure.

In total, the SS Block occupies 4 OFDM symbols in the time domain and 20 PRBs in the frequency domain. PRBs around the PSS within the SS Block bandwidth allocation are left unused to allow transmit power boosting for the PSS.

The structure is the same for below 6 GHz and above 6 GHz carrier frequency ranges and for different numerologies. The SS Block can be transmitted using 15 or 30 kHz SCS at below 6 GHz frequency bands and using 120 or 240 kHz SCS at above 6 GHz carrier frequency bands. Different allowed SCS options are depicted in Table 3.6.

Table 3.6 SS Block subcarrier spacings in given bands.

SS block SCS (kHz) NR operating band
15 n1, n2, n3, n7, n8, n20, n28, n38, n41, n50, n51, n70, n71, n74, n75, n76
15 or 30 n5, n66
30 n77, n78, n79
120 n258
120 or 240 n257, n260

The SS Block used for initial cell search can also be located elsewhere than in the middle of the system bandwidth, provided the PSS is on the predefined synchronization signal raster, which can be sparser than the channel raster.

In addition, for measurement purposes, the gNB may configure an SS Block that is not located on the synchronization signal raster. Such a configuration requires UE-dedicated signaling to point to the frequency location of the SS Block.

3.3.1.1 Primary Synchronization Signal (PSS)

When the UE performs cell search, it first searches for the PSS, which lies on a predefined synchronization signal raster in the frequency domain. The PSS is used for initial symbol boundary and coarse frequency synchronization to the NR cell. It is based on the CP-OFDM waveform for frequencies below 52.6 GHz. For initial access, the UE can assume a specific SCS for the PSS in a given frequency band, as defined in the 3GPP specifications and depicted in Table 3.6; however, for frequency bands having two options, the UE needs to perform a blind search over both options in the initial cell search.

There are three PSS sequences, as in LTE. Instead of Zadoff-Chu sequences, NR adopted a frequency-domain BPSK m-sequence. The m-sequence was selected because it does not have the time and frequency offset ambiguity present with Zadoff-Chu sequences. Ambiguity function plots of (a) the LTE PSS and (b) the NR PSS are presented in Figure 3.22. The LTE PSS sequence length is 62, and the NR PSS sequence length is 127. Detection performance under an initial frequency offset caused by oscillator mismatch is improved, and UE complexity is reduced, since the UE does not need to test as many PSS hypotheses in NR SSS detection as with the Zadoff-Chu-based LTE PSS. Correspondingly, joint PSS and SSS detection performance is improved in NR compared to LTE.

Image described by caption and surrounding text.

Figure 3.22 PSS time and frequency offset ambiguity of (a) LTE PSS sequence (left) and (b) NR PSS sequence (right).

An important design criterion was to improve one-shot PSS detection performance in NR compared to LTE. In addition, the selection of the m-sequence provided better signal characteristics under time and frequency offset ambiguity. As a result, a length-127 m-sequence was adopted, providing 3 dB larger processing gain and higher frequency diversity at the cost of increased UE complexity, since the bandwidth and sampling rate are doubled compared to LTE.

3.3.1.2 Secondary Synchronization Signal (SSS)

After the UE has detected the New Radio-Primary Synchronization Signal (NR-PSS) and acquired symbol timing and initial frequency synchronization, it tries to detect the SSS, which carries the PCI. Since the SSS is located at the same frequency position as the NR-PSS and one OFDM symbol apart, the UE may perform either non-coherent or coherent detection, the latter using channel estimates based on the NR-PSS.

The SSS is a Gold sequence of length 127. One polynomial provides 112 cyclic shifts and the other 9 cyclic shifts, together forming 1008 different PCIs. The index of the detected NR-PSS sequence (0, 1, or 2) is used in the generation of the nine cyclic shifts of the second polynomial. The Gold sequence dSSS(n) is depicted in Eq. (3.7), where m0 and m1 are the cyclic shifts applied to the two m-sequences x0(n) and x1(n), NID(1) takes values 0, 1, …, 335 and NID(2) takes values 0, 1, 2 corresponding to the index carried by the NR-PSS:

(3.7) dSSS(n) = [1 − 2x0((n + m0) mod 127)][1 − 2x1((n + m1) mod 127)]

where 0≤n<127 and

(3.8) m0 = 15⌊NID(1)/112⌋ + 5NID(2)

and

(3.9) m1 = NID(1) mod 112

This design provides enhancements compared to the LTE SSS. Doubling the sequence length brings a 3 dB processing gain, and adopting one long sequence instead of two shorter interleaved sequences as in LTE improves cell detection reliability, especially at the cell edge. In LTE, because the cell ID is delivered with two shorter sequences, a cell-ambiguity issue may arise especially for UEs at the cell edge, since piece-wise maximum-likelihood detection of each short m-sequence suffers a performance loss from the smaller spreading gain of the shorter sequences.
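The Gold-sequence construction described above can be sketched as follows; the LFSR recursions, initial states, and shift formulas below follow the construction in the 3GPP specification but should be treated as assumptions here:

```python
def m_sequence(taps, length=127, degree=7):
    """Binary m-sequence from an LFSR of the given degree and feedback taps."""
    x = [1, 0, 0, 0, 0, 0, 0]  # assumed initial state
    for i in range(length - degree):
        x.append(sum(x[i + t] for t in taps) % 2)
    return x

def sss_sequence(n_id1, n_id2):
    """BPSK SSS d(n): product of two cyclically shifted m-sequences."""
    x0 = m_sequence((0, 4))  # first polynomial (assumed taps)
    x1 = m_sequence((0, 1))  # second polynomial (assumed taps)
    m0 = 15 * (n_id1 // 112) + 5 * n_id2  # shift from PCI group and PSS index
    m1 = n_id1 % 112
    return [(1 - 2 * x0[(n + m0) % 127]) * (1 - 2 * x1[(n + m1) % 127])
            for n in range(127)]

d = sss_sequence(n_id1=335, n_id2=2)
print(len(d))  # 127
```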

The improved overall design compared to LTE is illustrated in Figures 3.23 and 3.24. Figure 3.23 presents the performance when a 5 ms PSS/SSS transmission periodicity is used for both LTE and NR, while Figure 3.24 depicts the performance when LTE has a 5 ms and NR a 20 ms PSS/SSS transmission periodicity. Clearly, the improved one-shot detection performance of NR allows using longer SS Block transmission periodicities without sacrificing performance, which at the same time enables better energy savings in the network.

Graph depicting the detection latency for LTE and NR for 5 ms PSS/SSS transmission periodicity, with 6 ascending curves for NR, CFO = 0 Hz; 3.75 kHz; 7.5 kHz and LTE, CFO = 0 Hz; 3.75 kHz; and 7.5 kHz.

Figure 3.23 Detection latency for LTE and NR for 5 ms PSS/SSS transmission periodicity.

Graph depicting the detection latency for LTE and NR when LTE is having 5 ms and NR 20 ms PSS/SSS transmission periodicity, with 6 ascending curves for NR, CFO = 0 Hz; 3.75 kHz; 7.5 kHz and LTE, CFO = 0 Hz; 3.75 kHz; and 7.5 kHz.

Figure 3.24 Detection latency for LTE and NR when LTE is having 5 ms and NR 20 ms PSS/SSS transmission periodicity.

3.3.1.3 Physical Broadcast Channel (PBCH)

The PBCH, a part of the SS Block, signals the most essential system information, shown in Table 3.7. The information includes timing information based on the System Frame Number (SFN), a half-frame indicator, the Most Significant Bits (MSBs) of the SS Block position within a half-frame, and information on how to receive the remaining minimum system information (RMSI). The parameters for receiving RMSI provide the UE with the time and frequency resources of the control resource set (CORESET) and monitoring parameters, such as periodicity and window duration, for detecting the PDCCH that schedules the PDSCH carrying the actual RMSI data. The PBCH transmission is based on a single antenna port, the same antenna port as the PSS and SSS within an SS Block. While frequency-domain precoder cycling is precluded, the gNB may use time-domain precoder cycling by changing the precoder from one PBCH transmission to another.

Table 3.7 PBCH content.

Parameter Number of bits Comment
SFN 10 Indicates system frame number
Half‐frame timing 1 Indicates first or second half‐frame of the frame
SS block location index (3 MSBs at above 6 GHz) 3 Reserved at below 6 GHz
Reserved for higher layer signaling 3
Offset between SS block frequency domain location and PRB grid in subcarrier level 5/4 5 bits for below 6 GHz and 4 bits above 6 GHz
DL numerology to be used for RMSI, Msg 2/4 for initial access and broadcasted OSI 1 15 or 30 kHz for below 6 GHz; 60 or 120 kHz above 6 GHz
Spare 0/1
CRC 24 Polar code with 24‐bit CRC
Total 56

DMRS of PBCH is mapped on every PBCH symbol with equal FDM density in all PRBs. In addition, PCID‐based FDM shift is used to map DMRS on REs in each PRB as depicted in Figure 3.25.

DMRS mapping on REs in a PRB based on Physical Cell ID depicted by 4 columns for 12 boxes for V = 0, V = 1, V = 2, and V = 3 with unfilled and shaded boxes representing DMRS RE and data RE, respectively.

Figure 3.25 DMRS mapping on REs in an PRB based on Physical Cell ID.

The DMRS sequence is based on the PCID and the N LSBs of the SS Block index, where N = 2 for carriers deployed below 3 GHz and N = 3 otherwise.
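The mapping and seeding rules above can be sketched as follows. The shift rule v derived from the PCID matches the four patterns of Figure 3.25; the seed-packing helper is purely illustrative and not the exact 38.211 initialization formula.

```python
def pbch_dmrs_res(pcid: int) -> list[int]:
    """DMRS resource-element indices within one PRB (12 subcarriers) for the
    PBCH: every fourth subcarrier, shifted by the PCID-based value v (0..3)."""
    v = pcid % 4                            # PCID-based FDM shift (Figure 3.25)
    return [sc + v for sc in (0, 4, 8)]     # three DMRS REs per PRB per symbol

def pbch_dmrs_seed(pcid: int, ssb_index: int, below_3ghz: bool) -> tuple[int, int]:
    """Illustrative packing of the DMRS sequence inputs: the PCID plus the
    N LSBs of the SS Block index, with N = 2 below 3 GHz and N = 3 otherwise."""
    n = 2 if below_3ghz else 3
    return pcid, ssb_index & ((1 << n) - 1)
```

With v cycling through 0..3, neighboring cells with different PCIDs place their PBCH DMRS on different subcarriers, which helps DMRS-based measurements.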

3.3.1.4 SS Block Burst Set

To support flexible resource allocation and beamforming for the SS Block, multiple SS Blocks can be transmitted by the gNB within a certain period. This set of SS Blocks is called an SS Block burst set. Within one burst set the gNB transmits the SS Blocks to cover the whole sector, which enables a narrower transmit antenna radiation pattern with higher beamforming gain than a sector‐wide radiation pattern. A number of fixed TDM locations where an SS Block can be transmitted are defined within a 5 ms half frame. The number depends on the carrier frequency range: below 3 GHz there are four locations available and between 3 and 6 GHz there are eight, as shown in Figure 3.26. Above 6 GHz there are up to 64 locations available for SS Block transmissions per SS Block burst set, as shown in Figure 3.27. This means that below 3 GHz the gNB may transmit the SS Block throughout the sector using four different transmit beams, while above 6 GHz the gNB may use up to 64 transmit beams.

Diagram illustrating SS Block with 15 kHz SCS (top), SS Block with 30 kHz SCS (alt.1) (middle), and SS Block with 30 kHz SCS (alt.2) (bottom).

Figure 3.26 SS Block positions within a slot as a function of SS Block subcarrier spacing below 6 GHz.

Diagram illustrating SS Block with 120 kHz SCS (top) and SS Block with 240 kHz SCS (bottom).

Figure 3.27 SS Block positions within a slot as a function of SS Block subcarrier spacing above 6 GHz.

The fixed TDM locations of the SS Blocks of the SS Block burst set within a 5 ms half frame in the frame structure depend on the applied SCS for the SS Block. At below 6 GHz the block can be transmitted either using 15 or 30 kHz SCS, as shown in Table 3.6, and at above 6 GHz either using 120 or 240 kHz SCS.

With 15, 30, and 120 kHz SCS there are two SS Block positions within a slot of 14 symbols and with 240 kHz SCS there are four positions within a slot of 28 symbols as shown in Figures 3.26 and 3.27.
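The frequency-dependent limits above can be summarized in a small helper; the behavior exactly at the 3 and 6 GHz boundaries is an assumption of this sketch.

```python
def max_ss_blocks(carrier_ghz: float) -> int:
    """Maximum number of SS Block candidate positions (and hence gNB transmit
    beams) per SS Block burst set, by carrier frequency range."""
    if carrier_ghz < 3:
        return 4
    if carrier_ghz <= 6:
        return 8
    return 64                               # above 6 GHz, up to 64 beams

def positions_per_slot(scs_khz: int) -> int:
    """SS Block positions per slot: two for 15/30/120 kHz SCS (14-symbol slot),
    four for 240 kHz SCS (counted over a 28-symbol span)."""
    return 4 if scs_khz == 240 else 2
```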

3.3.2 System Information Broadcast (SIB)

System information broadcast is divided into minimum system information (MSI) and other system information (OSI). MSI provides the UE with the information needed to access a cell. The most essential MSI parameters are conveyed in the PBCH of the SS Block, and the RMSI is carried in separate transmissions using PDSCH scheduled via PDCCH. For RMSI delivery, PBCH essentially provides the CORESET configuration and monitoring parameters so that the UE is able to monitor PDCCH for RMSI scheduling. The PBCH content, except the SS Block location index, is the same for all Synchronization Signal (SS) Blocks within an SS Block burst set on the same center frequency. The payload contents of the PBCH are illustrated in Table 3.7.

3.3.2.1 Remaining Minimum System Information (RMSI)

Three different multiplexing pattern options have been defined for multiplexing the PDCCH of the RMSI CORESET and the PDSCH carrying the RMSI data with the SS Block transmissions. The first option is TDM multiplexing between the SS Block on one hand, and the PDCCH RMSI CORESET and the PDSCH for RMSI delivery on the other, in separate slots. The time difference between the SS Block transmission and the RMSI transmission is not fixed, and they can have different transmission periods. All signals are confined within a bandwidth defined by the RMSI CORESET, which is the initial active DL BWP, as shown in Figure 3.28. This option is supported both at below and above 6 GHz carrier frequency ranges.

Schematic displaying 2 adjacent boxes labeled PDCCH RMSI CORESET and PDSCH for RMSI with a vertical two-headed arrow labeled Initial Active DL Bandwidth Part at the right side and a box labeled SS Block at the far left.

Figure 3.28 Time multiplexing of SS Block and RMSI transmission.

The second option is TDM multiplexing between the SS Block and the RMSI CORESET, and FDM multiplexing between the SS Block and the PDSCH for RMSI delivery, where the signals are confined within a bandwidth supported by all UEs. There can be unused PRB(s) between the SS Block PRBs and the PRBs used for the PDSCH transmission, as shown in Figure 3.29; however, this increases the needed minimum UE bandwidth. The SS Block and the PDCCH RMSI CORESET do not overlap in frequency. This option is supported only above 6 GHz.

Schematic displaying 2 adjacent boxes labeled PDCCH RMSI CORESET and PDSCH for RMSI with a vertical two-headed arrow labeled Initial Active DL Bandwidth Part at the right side and a box labeled SS Block at the top.

Figure 3.29 Option (a) Frequency multiplexing of SS Block and RMSI transmission.

The third option fully utilizes FDM multiplexing between the SS Block on one hand, and the RMSI CORESET and the PDSCH for RMSI delivery on the other, where the signals are confined within a bandwidth supported by all NR UEs. The RMSI CORESET carrying the PDCCH utilizes two OFDM symbols and the PDSCH carrying the RMSI utilizes another two OFDM symbols, together having exactly the same time duration as the SS Block, as shown in Figure 3.30. This option is supported only at above 6 GHz bands.

Schematic displaying 2 adjacent boxes labeled PDCCH RMSI CORESET and PDSCH for RMSI with a vertical two-headed arrow labeled Initial Active DL Bandwidth Part at the right. A box labeled SS Block is attached at the top side.

Figure 3.30 Option (b) Frequency multiplexing of SS Block and RMSI transmission.

The numerology used for SS Block transmission is defined for each band. For CORESET and PDSCH carrying RMSI the used numerology is signaled in PBCH as shown in Table 3.7. For below 6 GHz, either 15 or 30 kHz SCS can be used, above 6 GHz either 60 or 120 kHz is used.

Multiplexing option 1 supports all the different combinations, whereas option 2 can be used with an SS Block SCS of 120 or 240 kHz and RMSI numerologies of 60 or 120 kHz. For option 3, shown in Figure 3.30, the 120 kHz numerology must be used for both SS Block and RMSI transmission.
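The allowed combinations described above can be captured in a small lookup. This encodes the text's description only; the specification's tables may restrict the exact SS Block/RMSI SCS pairings further.

```python
# Allowed (SS Block SCS, RMSI SCS) pairs per multiplexing pattern, in kHz.
# Pattern 1 (pure TDM) is unrestricted; patterns 2 and 3 apply above 6 GHz.
ALLOWED_COMBOS = {
    2: {(120, 60), (120, 120), (240, 60), (240, 120)},
    3: {(120, 120)},
}

def combo_allowed(pattern: int, ssb_scs: int, rmsi_scs: int) -> bool:
    """Check whether an SS Block / RMSI numerology pair is valid for a given
    multiplexing pattern, per the description in the text."""
    if pattern == 1:
        return True                        # any defined combination
    return (ssb_scs, rmsi_scs) in ALLOWED_COMBOS.get(pattern, set())
```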

3.3.2.2 Other System Information

The broadcast delivery of OSI is supported by PDSCH transmission scheduled via PDCCH (like RMSI delivery). The same DL numerology is used for broadcasted OSI as is used for RMSI and as informed in the PBCH payload. Both slot‐based PDCCH and PDSCH, and non‐slot‐based PDSCH transmissions for broadcast OSI delivery are supported. For the non‐slot‐based transmission, 2, 4, and 7 OFDM symbol duration for the broadcast OSI PDSCH is supported. CORESET configuration will follow the one provided by PBCH for RMSI CORESET, but the TDM parameters (i.e. search space) are provided in RMSI.

3.3.3 Downlink Data Transmission

Section 3.2.3 discusses the basic frame structures and introduces the PDSCH/PUSCH resource allocation. The actual downlink data transmission and HARQ work within this setup and are not too different from the basic setup in LTE, but some critical differentiators exist. The PDCCH can be placed freely within the slot, and the number of symbols allocated for the PDSCH is very flexible, even though the typical mode of operation is to place the PDCCH in the beginning of the slot, followed by DMRS and PDSCH, not unlike the LTE setup.

Unlike in LTE, the maximum allowed timing advance (TA) is not baked into a fixed HARQ‐ACK timeline; the HARQ‐ACK timing can be scheduled freely, provided the UE is guaranteed sufficient time to process the received packet and prepare the HARQ‐ACK. This means that the gNB must factor in the TA budget on top of the UE processing time and delay the HARQ‐ACK for large TA settings. This allows supporting large cell radii without embedding the extra budget for the corresponding two‐way propagation delay in the HARQ loop latency of normal or small cell deployments. Unlike in LTE, where the PDSCH to HARQ‐ACK transmission delay is defined by the standard, the NR specification just defines the minimum processing time the UE must be guaranteed before it can be expected to reliably report HARQ‐ACK.
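The timing budget described above can be sketched numerically: the scheduled PDSCH-to-HARQ-ACK delay must cover the UE processing time plus the timing advance. The function name and the slot arithmetic (normal cyclic prefix assumed) are illustrative; the actual processing-time values come from the specification.

```python
import math

def min_harq_ack_delay_slots(n1_symbols: float, ta_seconds: float,
                             scs_khz: int) -> int:
    """Minimum PDSCH-to-HARQ-ACK delay (in slots) the gNB should schedule:
    UE processing time (N1, in OFDM symbols) plus the timing advance,
    rounded up to whole slots."""
    symbol_s = 1e-3 / (14 * (scs_khz // 15))   # symbol duration, normal CP
    slot_s = 14 * symbol_s                     # 14-symbol slot
    needed_s = n1_symbols * symbol_s + ta_seconds
    return math.ceil(needed_s / slot_s)
```

For example, with 30 kHz SCS a TA corresponding to a ~100 km cell (about 667 µs round trip) pushes the minimum feedback delay from one slot to three.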

In addition to the asynchronous HARQ‐ACK timing, the downlink HARQ retransmissions are also asynchronous. The UE may be configured to maintain up to 16 HARQ processes, and the PDCCH scheduling the PDSCH carries a HARQ process number and a new data indicator. This enables asynchronous HARQ retransmissions with no pre‐determined time relation between different transmission attempts and facilitates different gNB architectures, so that, e.g. a high latency fronthaul deployment can use a larger number of HARQ processes and longer retransmission delay and a low latency fronthaul deployment uses conversely a lesser number of HARQ processes and shorter retransmission delay.
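A rough sizing rule for the relation between fronthaul latency and the HARQ process count can be sketched as follows; the full-utilization model is a simplification introduced here for illustration.

```python
import math

def harq_processes_needed(rtt_s: float, slot_s: float) -> int:
    """Rough sizing: to keep a link fully loaded, roughly one HARQ process is
    needed per slot that elapses before the feedback for a transmission can
    drive its retransmission. rtt_s lumps together propagation, processing,
    and fronthaul delay; the result is capped at the 16 configurable
    processes."""
    return min(16, max(1, math.ceil(rtt_s / slot_s)))
```

A high-latency fronthaul (larger rtt_s) thus needs more processes in flight, while a low-latency deployment gets by with fewer, exactly as described above.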

The NR downlink also supports semi‐persistent scheduling like LTE, where the first scheduling PDCCH triggers a periodically repeating transmission occasion without needing to schedule every packet separately. This solution is aimed at reducing the required control channel capacity when supporting many simultaneous voice links in a cell.

Multi‐slot transmission (known also as slot repetition or slot aggregation) for improved coverage is possible both in downlink and uplink. As uplink is typically the coverage limiting link, the downlink multi‐slot transmission may not be as useful though. RRC can configure the UE to expect that the gNB repeats each TB transmitted in the downlink over a specific number of consecutive slots. The redundancy version is cycled across the slots so that full IR combining gain can be achieved. If some of the slots in the span of the consecutive slots are configured to be uplink slots, then those slots are omitted, but still counted as part of the slot aggregate.
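The repetition behavior above, with uplink slots omitted but still counted against the aggregate, can be sketched as follows; the redundancy-version order {0, 2, 3, 1} is an assumed cycling pattern for illustration.

```python
RV_CYCLE = (0, 2, 3, 1)   # assumed redundancy-version cycling order

def dl_repetitions(start_slot: int, k: int, ul_slots: set[int]):
    """Slots and redundancy versions for a downlink TB repeated over k
    consecutive slots. Slots configured as uplink are skipped but still
    consume a repetition opportunity."""
    out = []
    for i in range(k):
        slot = start_slot + i
        if slot in ul_slots:
            continue                      # omitted, but counted in the span
        out.append((slot, RV_CYCLE[i % 4]))
    return out
```

For a 4-slot aggregate starting at slot 0 with slot 2 configured as uplink, only three repetitions are actually sent, carrying RVs 0, 2, and 1.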

3.4 Uplink Physical Layer

3.4.1 Random Access

For contention‐based random access (CBRA), NR supports a 4‐step RACH procedure like LTE. A UE initiates the procedure by transmitting a Physical Random Access Channel (PRACH) preamble in Message 1 (Msg1). Upon detection of the preamble, the gNB responds with Message 2 (Msg2) containing the Random Access Response (RAR). The gNB uses PDCCH for scheduling and PDSCH for transmitting Msg2 within a configured TDM RAR window. The RAR includes an UL grant for Message 3 (Msg3), which is transmitted by the UE using PUSCH. Finally, the gNB transmits a contention resolution message (Msg4) using PDCCH for scheduling and PDSCH for transmitting the message. The physical channels for Msg1 to Msg4 together with the main content of each message are illustrated in Table 3.8. For contention‐free random access, only Msg1 and Msg2 are needed.

Table 3.8 Physical channels for messages in NR RACH procedure.

Message Physical layer channel Content
Message 1 PRACH RACH Preamble
Message 2 PDCCH, PDSCH Detected RACH preamble ID, Timing Advance, UL grant, C‐RNTI
Message 3 PUSCH RRC Connection request, Scheduling request
Message 4 PDCCH, PDSCH Contention resolution message

Numerology for PRACH preamble is provided in the RACH configuration (RMSI for standalone deployment). For Msg2 and Msg4 transmissions the SCS is the same as for RMSI (both PDCCH and PDSCH), for Msg3 the numerology is provided in the RACH configuration separately from the Msg1 SCS. For contention‐free Random Access (RA) procedure for handover, the SCS for Msg1 and the SCS for Msg2 are provided in the handover command.

As a new component, corresponding to multiple SS Block locations within a half frame, NR supports receive beamforming in TDM for PRACH preamble reception by allocating multiple RACH occasions for which the gNB may use different receive beams. PRACH occasion and PRACH preamble selection by the UE also signals the preferred SS Block beam that will be used for Msg2 and Msg4 transmissions.

This is done by configuring an association between each SS Block and a subset of RACH resources (one or multiple RACH occasions) and/or a subset of PRACH preamble indices, which determines the Msg2 and Msg4 DL TX beam. Based on the DL measurements on the SS Block(s) and the corresponding association, the UE selects the subset of RACH resources and/or the subset of RACH preamble indices for PRACH preamble selection. The association is provided to the UE by RMSI (contention‐based RACH) or is otherwise known to the UE, and it is based on the actually transmitted SS Blocks indicated in RMSI.
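A UE-side selection along these lines might look like the following sketch. The contiguous preamble split per SS Block and all names here are assumptions for illustration; the real association is configured via RMSI.

```python
import random

def select_preamble(ssb_rsrp: dict[int, float],
                    preambles_per_ssb: int = 16) -> tuple[int, int]:
    """Illustrative UE behavior: measure the SS Blocks, pick the strongest
    one, and draw a random preamble from the index range associated with
    that SS Block. Returns (chosen SS Block index, preamble index)."""
    best_ssb = max(ssb_rsrp, key=ssb_rsrp.get)        # strongest beam
    first = best_ssb * preambles_per_ssb              # assumed contiguous split
    preamble = random.randrange(first, first + preambles_per_ssb)
    return best_ssb, preamble
```

The gNB, on detecting a preamble from that range on the associated RACH occasion, learns which downlink beam to use for Msg2 and Msg4.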

In the handover case, the source cell can indicate, in the handover command, an association between RACH resources and CSI‐RS configuration(s), or between RACH resources and SS Blocks. In other words, for mobility the association may be defined between CSI‐RS resource(s) and PRACH resources.

NR supports both long and short sequence‐based PRACH preambles. Long sequences are targeted to macro, and extended coverage and long‐range deployments. Short sequences are introduced to support efficient small cell deployments and efficient beamforming support. In addition, one design principle related to short sequences was the ability to configure the same SCS for PRACH preamble as for other uplink channels like NR‐PUCCH and PUSCH.

3.4.1.1 Long Sequence

The sequence length for the long sequence‐based PRACH preamble is 839, as in LTE, and the Zadoff‐Chu sequence family known from LTE is used. Long sequence preambles are supported for below 6 GHz deployments with two different SCSs: 1.25 and 5 kHz. Three different formats have been defined for the 1.25 kHz SCS, targeting LTE refarming, support of cells with up to 100 km range, and extended coverage. One format has been designed for the 5 kHz SCS option, especially for high‐speed cases. All long sequence‐based formats support type A and type B restricted sets to support different mobility scenarios. The type A and B restricted sets consist of PRACH preambles whose cyclic shift distance allows unambiguous preamble detection under Doppler frequencies of up to ±1 SCS and ±2 SCS, respectively. Table 3.9 illustrates the long sequence‐based PRACH preamble formats.

Table 3.9 Long sequence PRACH preambles.

Format LRA Δf RA (kHz) Nu NCPRA TGP Use case Support for restricted sets
0 839 1.25 24576κ 3168κ 2975 LTE refarming Type A, Type B
1 839 1.25 2*24576κ 21024κ 21904 Large cells, up to 100 km Type A, Type B
2 839 1.25 4*24576κ 4688κ 4528 Coverage enhancement Type A, Type B
3 839 5 4*6144κ 3168κ Type A, Type B
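Reading the Nu/NCP/TGP columns of Table 3.9 in units of κ·Tc (one unit equals 1/30.72e6 s, the LTE basic time unit), the preamble durations and the guard-period-limited cell range can be checked numerically. This is a sketch assuming the guard period must absorb the full round-trip propagation delay, with c approximated as 3e8 m/s.

```python
# One kappa*Tc equals Ts = 1/30.72e6 s (kappa = 64, Tc = 1/(480e3 * 4096)).
TS = 1 / 30.72e6
C = 3e8  # speed of light, approximate (m/s)

def preamble_timing(nu: int, ncp: int, tgp: int) -> tuple[float, float]:
    """Total duration (s) and guard-period-limited cell range (m) for a long
    PRACH format, with table values given in units of kappa*Tc."""
    duration = (nu + ncp + tgp) * TS
    max_range = C * tgp * TS / 2      # round trip must fit in the guard period
    return duration, max_range
```

Format 0 (LTE refarming) comes out at about 1 ms in total with a ~14.5 km range, while format 1's long guard period supports cells of roughly 100 km, matching the use cases in the table.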

3.4.1.2 Short Sequence

Sequence length for short sequence‐based PRACH preamble formats is 139 and the Zadoff‐Chu sequence family is used. Short sequence preamble formats support different numerologies, namely 15 and 30 kHz at below 6 GHz, and 60 and 120 kHz at above 6 GHz. Starting symbol for the PRACH preamble format can be symbol 0 or 2 within a slot. The latter is to provide room for PDCCH in the beginning of the slot.

A random access preamble format consists of one or multiple random access preamble(s). A random access preamble consists of a cyclic prefix (CP) followed by one preamble sequence, and one preamble sequence consists of one or multiple RACH OFDM symbol(s). Furthermore, a RACH occasion (RO) is defined as the time‐frequency resource on which the PRACH preamble is transmitted using the configured PRACH preamble format with a single transmit beam at the UE. There are nine base formats as shown in Table 3.10.

Table 3.10 Base formats for short sequence‐based PRACH preambles.

Format LRA Δf RA (kHz) Nu NCPRA
A1 139 15 · 2^μ 2 · 2048κ · 2^−μ 288κ · 2^−μ
A2 139 15 · 2^μ 4 · 2048κ · 2^−μ 576κ · 2^−μ
A3 139 15 · 2^μ 6 · 2048κ · 2^−μ 864κ · 2^−μ
B1 139 15 · 2^μ 2 · 2048κ · 2^−μ 216κ · 2^−μ
B2 139 15 · 2^μ 4 · 2048κ · 2^−μ 360κ · 2^−μ
B3 139 15 · 2^μ 6 · 2048κ · 2^−μ 504κ · 2^−μ
B4 139 15 · 2^μ 12 · 2048κ · 2^−μ 936κ · 2^−μ
C0 139 15 · 2^μ 2048κ · 2^−μ 1240κ · 2^−μ
C2 139 15 · 2^μ 4 · 2048κ · 2^−μ 2048κ · 2^−μ

Given the base formats, 10 different formats can be configured: A1, A2, A3, B1, B4, C0, C2, A1/B1, A2/B2, and A3/B3. In the Ax/Bx formats, all preambles except the last one in a slot use the Ax format and the last preamble uses the Bx format. The number of RACH occasions within a slot for the different configurable formats is given in Table 3.11.

Table 3.11 Number of RACH occasions within a slot.

Format Number of RACH occasions within a slot Format Number of RACH occasions within a slot
A1 6 C0 4
A2 3 C2 2
A3 2 A1/B1 6 or 7
B1 6 or 7 A2/B2 3
B4 1 A3/B3 2

Figure 3.31 exemplifies A1 format within a slot together with two different starting symbols: #0 and #2.

Schematic with a row of adjacent shaded boxes for PUSCH (top), a row of adjacent boxes with various patterns for Starting symbol #0 (middle), and a row of adjacent boxes with various patterns for Starting symbol #2 (bottom).

Figure 3.31 PRACH preamble formats A1 with two different starting symbols within a slot: #0 and #2.

3.4.2 Uplink Data Transmission

Section 3.2.3 discusses the basic frame structures and introduces the PDSCH/PUSCH resource allocation. The uplink data transmission and HARQ work within this setup and are not too different from the basic setup in LTE, but some differentiators exist. The number of symbols allocated for PUSCH is fully flexible, and the timing relative to the scheduling PDCCH can be dynamically chosen by the gNB. As in the downlink case, the typical mode of operation is to place the PDCCH in the beginning of the slot, and the uplink DMRS and PUSCH will follow in the next slot (FDD) or in the next uplink slot (TDD).

As with the downlink HARQ‐ACK timing, the maximum allowed TA is not baked into the PDCCH‐to‐PUSCH delay; the PUSCH timing can be scheduled freely with the scheduling PDCCH, provided the UE is guaranteed sufficient time to process the PDCCH and encode the data packet for transmission on the PUSCH. This means that the gNB must factor in the TA budget on top of the UE processing time defined in the standard. This allows supporting large cell radii without embedding the extra budget for the corresponding two‐way propagation delay in the PUSCH preparation latency of normal or small cell deployments. Unlike in LTE, where the PDCCH to PUSCH transmission delay is defined by the standard to accommodate also the largest supported cell radius, the NR specification just defines the minimum processing time the UE must be guaranteed before it can be expected to be ready to start the PUSCH.

In addition to the asynchronous PDCCH to PUSCH timing, there is no HARQ timing for PUSCH in the specifications. Each transmission is independently scheduled with PDCCH that carries the HARQ process number and new data indicator. This enables asynchronous HARQ retransmissions with no pre‐determined time relation between different transmission attempts and facilitates different gNB architectures so that, e.g. a high latency fronthaul deployment can use a larger number of HARQ processes and longer retransmission delay and a low latency fronthaul deployment conversely uses a lesser number of HARQ processes and shorter retransmission delay. The UE always supports 16 HARQ processes, but the gNB only uses as many as it needs.

Multi‐slot transmission (known also as slot repetition or slot aggregation) for improved coverage is possible both in downlink and in uplink. As uplink is typically the coverage limiting link, it is likely the more beneficial of the two in real deployments. RRC can configure the UE to repeat each transport block over a specific number of consecutive slots. The redundancy version is cycled across the slots so that full IR combining gain can be achieved. If some of the slots in the span of the consecutive slots are configured to be downlink slots, those slots are omitted, but still counted as part of the slot aggregate. The same symbol and frequency allocation is used across all the repeated slots.

Like in LTE, NR uplink supports frequency hopping for PUSCH to equalize the impact of frequency selective channels. Two types of frequency hopping are supported, but operation without frequency hopping is possible as well. When multi‐slot transmission is not used, the frequency hop takes place once, in the middle of the allocated transmission duration, e.g. if 10 symbols were allocated for the PUSCH, then the frequency hop takes place in between the 5th and 6th symbol of the transmitted transport block (TB). The frequency offset to be applied is fully configurable. When multi‐slot transmission is used, it is possible to configure the link to hop once per slot, rather than once within a slot.
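The mid-allocation hop can be sketched as follows; the names are illustrative, and giving the second hop the extra symbol for odd durations is an assumption consistent with the 10-symbol example above.

```python
def intra_slot_hop(n_symbols: int, start_prb: int, hop_offset: int):
    """First and second hop of a PUSCH with intra-slot frequency hopping:
    returns [(starting PRB, number of symbols), ...] with the hop placed in
    the middle of the allocation and hop_offset as the configured PRB
    offset."""
    first_len = n_symbols // 2                   # e.g. 10 symbols -> 5 + 5
    return [(start_prb, first_len),
            (start_prb + hop_offset, n_symbols - first_len)]
```

With multi-slot transmission and per-slot hopping configured, the same split would instead be applied once per repeated slot rather than within the slot.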

3.4.3 Contention‐Based Access

NR uplink supports two types of transmissions with configured grant (grant‐free transmissions). One of the types, type 2, is often referred to as semi‐persistent scheduling like for NR downlink and LTE as discussed in Section 3.3.3, where the first scheduling PDCCH triggers a periodically repeating transmission occasion without the need to schedule every packet separately. This solution is aimed at reducing the required control channel capacity when supporting many simultaneous voice links in a cell.

The other type of transmission with configured grant, type 1, is analogous to circuit‐switched radio: the RRC configuration provides the UE with a set of time/frequency resources as transmission opportunities as well as the MCS to use, and the UE transmits on a given resource whenever it has data. The gNB needs to detect the presence of the transmission, e.g. from the UE‐specific DMRS, and to attempt to decode it based on the provided configuration. Multi‐slot transmission with RV cycling can be used in conjunction with configured grant transmissions. Retransmissions after failed decoding attempts are scheduled using the normal uplink scheduling procedure via PDCCH.

3.5 Radio Protocols

3.5.1 Overall Radio Protocol Architecture

In NR, a set of radio protocol layers is used to convey different IP‐packet formats to the PHY as user plane data. In addition to IP packets, NR supports transport of Ethernet MAC frames as user plane data. These data packets are transmitted over Data Radio Bearers (DRBs). NR radio protocols also provide reliable transport of RRC protocol messages via Signaling Radio Bearers (SRBs). The RRC provides an overall toolbox for RRM and UE configuration, and conveys different Non‐Access Stratum (NAS) signaling messages between the core network (CN) and the UE. To achieve this, and to support different network architectures, the NR radio protocol layers have several functions for efficient radio operation. In the following sections the radio protocols are explained in detail.

The NR user plane and control plane radio protocol architecture is depicted in Figures 3.32 and 3.33 (see [36]). Compared to the previous generations, i.e. UMTS and LTE, the new Service Data Adaptation Protocol (SDAP) was introduced to the user plane. The main function of SDAP is to enable radio protocol support for the 5G Quality of Service (QoS) framework. The new QoS framework is discussed in Chapter 6; details of the SDAP functions are presented in Section 3.5.5.

Control plane protocol stack with 3 panels for UE, gNB, and AMF (left-right). NAS in UE is linking to NAS in AMF. RRC, PDCP, RLC, MAC, and PHY in UE are linking to RRC, PDCP, RLC, MAC, and PHY in gNB, respectively.

Figure 3.32 Control plane protocol stack.

User plane protocol stack depicted by 2 panels for UE and gNB. Panel for UE contains boxes for SDAP, PDCP, RLC, MAC, and PHY that are linking to boxes for SDAP, PDCP, RLC, MAC, and PHY in the panel for gNB, respectively.

Figure 3.33 User plane protocol stack.

Most parts of the user and control plane protocol layers are very similar to those in LTE. However, all protocols were re‐designed to meet 5G requirements and to support the NR network architecture options. These options include different dual connectivity solutions, essential for a UE having simultaneous connections on low operating frequencies and mmWave frequencies. The protocol architecture with master gNB bearer dual connectivity is depicted in Figure 3.34.

Schematic displaying a left panel with lines from MAC to RLC linking to PDCP in a panel for MgNB at the right. Panel for MgNB has linked boxes for SDAP, PDCP, etc. 2 SDAP boxes are linking to MCG split bearer and MCG bearer.

Figure 3.34 MgNB bearers for dual connectivity [36].

In addition, the radio protocol architecture is designed to support different radio network cloudification solutions, where parts of the protocols are implemented in the cloud. Different architecture options are discussed in Chapter 4.

Perhaps the most significant driver for the re‐design of the radio protocols was to obtain a more processing‐friendly design for high data rate as well as low latency connections. This was seen as essential for UE chipset implementations, but the gains for BS implementations are also significant.

3.5.2 Medium Access Control (MAC)

The MAC layer is in the center of all NR procedures. It is the interface between Layer 1 and upper layers. The MAC protocol and its functions are specified only for the UE in NR (see [37]):

  1. Mapping between logical channels (LCHs) and transport channels according to the transferred traffic type, as presented in Section 3.5.2.1.
  2. Multiplexing/demultiplexing of MAC Service Data Units (SDUs) belonging to one or different LCHs into/from transport blocks (TBs) delivered to/from the PHY on transport channels. This function includes concatenation of SDUs; concatenation is solely performed by MAC in the NR radio protocol stack.
  3. Scheduling information/buffer status reporting (BSR) to the network for receiving UL grants by means of the Scheduling Request (SR) procedure, which may involve triggering of the Random Access procedure in the absence of a dedicated SR resource configuration.
  4. Error correction through HARQ. Adaptive and asynchronous HARQ is used in both uplink and downlink, fully under the control of the network.
  5. Priority handling between LCHs by means of the logical channel prioritization (LCP) procedure to support QoS enforcement among different services that run in parallel in the UE. The MAC entity may be configured to restrict mapping of an LCH to a grant with certain L1 characteristics, which may include, e.g. the used numerology and/or the Transmission Time Interval (TTI) duration.
  6. Padding: MAC is the only radio protocol layer in charge of padding. Padding is performed in case not enough data can be multiplexed within one TB granted for transmission in L1. The MAC should always maximize the transmission of data, i.e. padding is usually only allowed when there is not enough data in the buffer to fill the full grant.
  7. Beam failure management to support L1 procedures. MAC is responsible for triggering the UE‐based beam management procedures like beam failure detection (BFD) and recovery.

In comparison with LTE, the MAC layer in NR holds the same set of functions, augmented with the beam failure management procedures in support of systems operating at high frequencies. However, much thought has been put into the actual processing of data and control in the MAC layer, as well as into the design of the Protocol Data Unit (PDU) structure, to be able to squeeze the UE's UL grant‐to‐data transmission time to an absolute minimum.

3.5.2.1 Logical Channels and Transport Channels

The MAC entity provides an interface to Radio Link Control (RLC) entities through LCHs, which are characterized either as control channels carrying different kind of control plane data or traffic channels carrying user plane data. The following LCHs are defined for control plane data:

  1. Broadcast Control Channel (BCCH): for the provisioning of system information messages to the UE, which include either the Master Information Block (MIB) or System Information Block(s) (SIBs). Applicable only in the DL direction.
  2. Paging Control Channel (PCCH): for the provisioning of paging messages to UEs in the cell, used to page UEs in IDLE or INACTIVE mode, to indicate a change in system information, or to indicate that a warning system message or public warning system (PWS) message is to be broadcast. Applicable only in the DL direction.
  3. Common Control Channel (CCCH): for the transmission of non‐secured RRC messages of SRB0 between the UE and the network prior to the establishment of SRB1 and ciphering. Applicable in both UL and DL.
  4. Dedicated Control Channel (DCCH): for the transmission of RRC messages of SRB1, SRB2, or SRB3 between the UE and the network to configure/maintain the RRC connection or to convey piggybacked NAS messages.

The following LCH is defined for user plane data:

  1. Dedicated Traffic Channel (DTCH): for the transmission of all user plane data. Multiple DTCHs can be established for different type of user plane data according to their QoS requirements.

The interface from L1 to the MAC entity is handled through transport channels, and MAC is responsible for mapping the data from LCHs to the proper transport channels. Transport channels are defined based on the L1 characteristics of how and when information is transmitted. Through the transport channels, L1 provides MAC with a transport block (TB) and its size, based on PHY parameters like the MCS and slot length. MAC fills the TB with a MAC PDU carrying the data.

The following transport channels are defined in downlink:

  1. Broadcast Channel (BCH): for the transmission of MIB from the BCCH LCH, i.e. it only conveys part of the system information carried in BCCH. The BCH uses a fixed sized container and has fixed position/periodicity in L1, so it does not need to be scheduled.
  2. Paging Channel (PCH): for the transmission of paging messages from the PCCH LCH. The PCH has configurable periodicity and has flexible message size, hence, it is scheduled each time using the Paging Radio Network Temporary Identifier (P‐RNTI), which is common in the system. However, the used transport format is limited by the minimum supported UE category in the system.
  3. Downlink Shared Channel (DL‐SCH): for the transmission of downlink control and user plane data from BCCH (excluding MIB), CCCH, DCCH, and DTCH LCHs. Each UE is allocated a DL‐SCH when scheduled, and the UE may be scheduled using multiple DL‐SCHs within one time instant, e.g. through carrier aggregation. Furthermore, a DL‐SCH carrying system information messages from BCCH may be transmitted simultaneously with UE‐dedicated DL‐SCHs. DL‐SCH is flexibly configurable on a per‐UE basis according to the supported UE capabilities.

The following transport channels are defined in uplink:

  1. Uplink Shared Channel (UL‐SCH): for the transmission of uplink control and user plane data from CCCH, DCCH, and DTCH LCHs. It is the uplink equivalent of the DL‐SCH.
  2. RACH: for the transmission of the random access preamble. No LCH maps to RACH, since RACH does not carry any data above MAC; the random access preamble and resource selection are performed by the MAC entity.

Mapping between the LCHs and transport channels is presented in Figures 3.35 and 3.36 for downlink and uplink.

Schematic displaying a horizontal line for downlink logical channels with ovals labeled PCCH, BCCH, CCCH, DCCH, and DTCH linking to ovals labeled PCH, BCH, and DL-SCH on the horizontal line for downlink transport channels.

Figure 3.35 Downlink logical channel mapping to transport channels.

Schematic displaying a horizontal line for uplink logical channels (top) with ovals labeled CCCH, DCCH, and DTCH linking to an oval labeled UL-SCH on the horizontal line for uplink transport channels (bottom).

Figure 3.36 Uplink logical channel mapping to transport channels.
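The channel mappings of Figures 3.35 and 3.36 can be summarized as simple lookup tables. A minimal sketch (function and table names are illustrative assumptions):

```python
# Downlink and uplink logical-to-transport channel mappings, expressed
# as lookup tables that mirror Figures 3.35 and 3.36.
DL_MAPPING = {
    "PCCH": ["PCH"],
    "BCCH": ["BCH", "DL-SCH"],   # MIB on BCH, remaining SI on DL-SCH
    "CCCH": ["DL-SCH"],
    "DCCH": ["DL-SCH"],
    "DTCH": ["DL-SCH"],
}
UL_MAPPING = {
    "CCCH": ["UL-SCH"],
    "DCCH": ["UL-SCH"],
    "DTCH": ["UL-SCH"],
}

def transport_channels_for(lch: str, downlink: bool) -> list:
    """Return the transport channels a logical channel can map to."""
    table = DL_MAPPING if downlink else UL_MAPPING
    return table.get(lch, [])
```

Note that RACH has no entry, since no logical channel maps to it.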

3.5.2.2 MAC PDU Structures for Efficient Processing

The MAC PDU structure used in NR for data transmission, i.e. data transmitted over UL‐SCH and DL‐SCH, is designed to support very efficient processing both in the transmitter and the receiver. This is motivated by the very high 5G data rates and by the move of the concatenation feature from the RLC protocol to the MAC layer, which requires many more MAC SDUs to be multiplexed within one MAC PDU. Processing efficiency is mainly targeted in the UE, to support grant‐to‐transmission times and data reception‐to‐feedback transmission times that are as short as possible.

Unlike in conventional 3GPP systems, where the MAC header with a number of sub‐headers is packed in front of the MAC PDU before the data, SDUs or control elements (CEs), NR interlaces each MAC sub‐header with the corresponding MAC SDU or CE. The combination of MAC sub‐header and MAC SDU/CE is called a “MAC subPDU,” and these are multiplexed one by one into the MAC PDU. Such a design enables pipeline processing to be used in the transmitter as well as in the receiver. The transmitter can start feeding L1 with the MAC PDU one MAC subPDU at a time, before the whole MAC PDU has been constructed. Similarly, the receiver can pass a part of the MAC PDU for MAC processing even before the entire slot has been received. That is, parallel L2 and L1 processing can be performed even for a single transmission/reception.

MAC PDU structures for downlink and uplink are illustrated in the following Figures 3.37 and 3.38.

Image described by caption and surrounding text.

Figure 3.37 MAC PDU structure in DL.

Image described by caption and surrounding text.

Figure 3.38 MAC PDU structure in UL.

As can be seen in Figures 3.37 and 3.38, for downlink MAC PDU the MAC CEs are multiplexed together in front of the PDU. For uplink MAC PDU they are still multiplexed at the end of the PDU but before padding. The downlink design allows the UE to process any MAC control received in downlink as soon as possible. For uplink, putting the control information at the end of the MAC PDU allows the UE to process such control as late as possible in the transmitter, e.g. the buffer status report shall consider any data multiplexed in the PDU where the buffer is reported.

MAC sub‐headers are also simplified compared to LTE. There is no explicit bit indicating whether another MAC SDU/CE follows; this is implicitly determined from the MAC SDU/CE length, the remaining MAC PDU size, and the possible padding sub‐header. MAC padding is multiplexed only at the end of the MAC PDU, indicated by a one‐byte MAC sub‐header without a length field, after which all remaining bits in the PDU are padding. The length field size is indicated dynamically in the sub‐header to support various SDU sizes; otherwise, the LCID (Logical Channel ID) field determines whether the MAC sub‐header comes with the L field or not (see Figure 3.39).

MAC sub-header structures with one byte length field (top left), two byte length field (top right), and without length field (bottom).

Figure 3.39 MAC sub‐header structures.
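The sub‐header layouts of Figure 3.39 lend themselves to the pipeline parsing described above. The following is a simplified, illustrative parser that assumes the F bit selects between an 8‐bit and a 16‐bit L field and treats LCID 63 as padding; fixed‐size MAC CEs without an L field are omitted for brevity:

```python
# Simplified MAC PDU parsing sketch: one byte R|F|LCID, followed by an
# 8-bit L field when F=0 or a 16-bit L field when F=1 (Figure 3.39).
# LCID 63 is taken to mean padding (an assumption for this sketch).
PADDING_LCID = 63

def parse_mac_pdu(pdu: bytes):
    """Return (lcid, payload) pairs, one subPDU at a time."""
    i = 0
    subpdus = []
    while i < len(pdu):
        first = pdu[i]
        lcid = first & 0x3F                # low 6 bits: LCID
        if lcid == PADDING_LCID:           # rest of the PDU is padding
            break
        f = (first >> 6) & 0x1             # F bit selects L-field size
        if f == 0:
            length = pdu[i + 1]            # one-byte L field
            i += 2
        else:
            length = (pdu[i + 1] << 8) | pdu[i + 2]   # two-byte L field
            i += 3
        subpdus.append((lcid, pdu[i:i + length]))
        i += length                        # next sub-header follows the SDU
    return subpdus
```

Because each sub‐header immediately precedes its SDU, a receiver implementation can hand each `(lcid, payload)` pair onward before the loop finishes, which is the pipelining benefit discussed above.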

3.5.2.3 Procedures to Support UL Scheduling

The UE MAC layer implements an LCP function, which performs the actual scheduling of UL data from different LCHs to the available resources in the MAC PDU. Furthermore, several procedures are defined in the UE MAC layer to support the gNB with UL scheduling decisions: scheduling request (SR), buffer status reporting (BSR), and power headroom reporting (PHR). These procedures are specified similarly to LTE; however, NR introduced an enhancement called LCP restrictions to support the various 5G services more efficiently also in the UL direction.

In the LCP function of the UE the MAC entity uses a bucket size scheduler to decide how much data is multiplexed from each LCH to a certain MAC PDU. Each LCH is assigned a priority, prioritized bit rate (PBR), and bucket size duration (BSD) based on which the bucket size for each LCH is calculated. Furthermore, given the introduction of multiple sub‐carrier spacings (SCS), non‐slot‐based scheduling, and in general the various services to be supported, the LCP function can be configured to apply LCP restrictions in the scheduling. With LCP restrictions, data from a certain LCH can be restricted to be mapped only to a certain type of grant categorized by its PHY characteristics. The grant is categorized by transmission duration, used SCS, used carrier, and whether it is a dynamically scheduled or configured grant. For instance, this allows URLLC data to be multiplexed only to “fast grants,” which are usually short in terms of transmission duration and gNB feedback is fast. Furthermore, LCP restrictions are applied in case of Packet Data Convergence Protocol (PDCP) duplication to avoid multiplexing of duplicated data in one MAC PDU. PDCP functions are discussed in Section 3.5.4.4.
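The bucket‐based LCP operation described above can be sketched as follows. This is a simplified illustration rather than the normative 3GPP procedure, and all names (`LogicalChannel`, `lcp_allocate`, etc.) are assumptions:

```python
# Sketch of bucket-size LCP scheduling: each LCH accumulates credit at
# its prioritized bit rate (PBR), capped at PBR * BSD; the grant is then
# served in priority order, first against bucket credit, then freely.

class LogicalChannel:
    def __init__(self, priority, pbr_bytes_per_ms, bsd_ms):
        self.priority = priority                    # lower value = higher priority
        self.pbr = pbr_bytes_per_ms                 # prioritized bit rate
        self.bucket_cap = pbr_bytes_per_ms * bsd_ms # cap = PBR * BSD
        self.bucket = 0                             # accumulated credit
        self.buffer = 0                             # bytes waiting for transmission

    def tick(self, ms=1):
        """Accumulate credit at the PBR, capped at PBR * BSD."""
        self.bucket = min(self.bucket + self.pbr * ms, self.bucket_cap)

def lcp_allocate(channels, grant_bytes):
    """Round 1 serves each channel up to its bucket credit in priority
    order; round 2 distributes any remaining grant in strict priority."""
    alloc = {ch: 0 for ch in channels}
    ordered = sorted(channels, key=lambda c: c.priority)
    for ch in ordered:                              # round 1: bucket-limited
        take = min(ch.buffer, max(ch.bucket, 0), grant_bytes)
        alloc[ch] += take
        ch.bucket -= take
        grant_bytes -= take
    for ch in ordered:                              # round 2: strict priority
        take = min(ch.buffer - alloc[ch], grant_bytes)
        alloc[ch] += take
        grant_bytes -= take
    return alloc
```

An LCP restriction would simply exclude a channel from `channels` when the grant's PHY characteristics (duration, SCS, carrier, grant type) do not match the channel's configured restriction.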

The SR procedure is used by the UE MAC entity to indicate to the gNB that data is available in the UL buffers. The MAC entity may be configured by the network with multiple SR configurations, and each UL LCH can be associated with a certain SR configuration. This enables the serving gNB to determine, already from the SR transmission through the LCH association, the service for which the UE has UL data available, and hence allows it to provide the right grant type. This is especially useful when LCP restrictions are configured. In case the UE has data in its buffers for different LCHs associated with different SR resources, it can have multiple SR procedures ongoing at any point in time.

A UE uses BSR to indicate to the serving gNB the amount of available data in the UL buffers. The buffer is reported on a per Logical Channel Group (LCG) basis, where an LCG may consist of one or more LCHs. NR implements 8 LCGs compared to 4 in LTE, which is justified by the support of a larger number of DRBs (29 at maximum) and PDCP duplication (see Section 3.5.4.4). As the BSR is multiplexed into any type of grant in UL, enhancements are designed to avoid additional latency for latency sensitive data. In case the BSR was triggered by latency sensitive data and the LCP restrictions do not allow multiplexing that data into the available grant, an SR can be triggered to indicate the presence of latency sensitive data regardless of the multiplexed BSR.

PHR is used to indicate to the serving gNB the available UL power the UE can still use. The power headroom (PH) is reported per each serving cell with configured UL carrier in case of carrier aggregation. By means of knowing the PH for each serving cell, the gNB can adjust its scheduling decisions such that the available UL maximum transmit power is not exceeded and hence the probability for errors is not increased.

3.5.2.4 Discontinuous Reception and Transmission

For UE power saving during active data transmission, NR supports discontinuous reception (DRX) and UL grant skipping, which are applicable when there are short moments without DL or UL data. The network configures DRX according to the traffic characteristics of the active service, however, as there may be multiple active services, the DRX concept also adapts to these by means of two different DRX cycle durations that can be used simultaneously.

DRX is applied to enable the UE receiver to sleep when no data transmission is expected in DL. Basically, DRX controls the need for the UE to monitor PDCCH scheduling decisions and is dictated by several timers, including inactivity and HARQ Round Trip Time (RTT) timers, and by network commands through MAC CEs. Two different DRX cycles can be configured, a Short and a Long DRX cycle, of which the Short cycle is used for short sleeping periods when data arrives in the buffer in short bursts. The Long cycle is applied when data is not expected for a longer time and a longer sleep duration can be applied. The gNB can also indicate to the UE with a MAC CE to go directly to Long DRX, e.g. in case the gNB determines it has no more DL data to send to the UE. The sleep times enabled by the Short and Long DRX vary from 2 milliseconds up to ∼10 seconds, so various types of configurations and services can be supported. Compared to LTE, the main notable difference in the NR DRX concept is that the different cells configured for the UE may use different sub‐carrier spacings, which means that the slot boundaries are not aligned. Regardless of this, common DRX in the MAC entity is applied. As the HARQ RTT timers are cell specific, they depend on the used sub‐carrier spacing, resulting in different timer values in different cells.
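The selection between the Short and Long cycles can be illustrated with a toy model in which a scheduling PDCCH restarts the inactivity period; the class and parameter names are assumptions, and details such as the on‐duration timer are omitted:

```python
# Toy sketch of Short/Long DRX cycle selection: a new scheduling PDCCH
# restarts the inactivity timer; when it expires the UE first applies
# the Short cycle, and after the short-cycle timer expires it falls
# back to the Long cycle.

class DrxState:
    def __init__(self, inactivity_ms, short_cycle_timer_ms):
        self.inactivity_ms = inactivity_ms
        self.short_cycle_timer_ms = short_cycle_timer_ms
        self.idle_ms = 0   # time since last scheduling PDCCH

    def on_pdcch(self):
        """A scheduling PDCCH restarts the inactivity timer."""
        self.idle_ms = 0

    def on_tick(self, ms=1):
        self.idle_ms += ms

    def cycle(self):
        if self.idle_ms < self.inactivity_ms:
            return "ACTIVE"            # inactivity timer still running
        if self.idle_ms < self.inactivity_ms + self.short_cycle_timer_ms:
            return "SHORT_DRX"         # short sleeping periods
        return "LONG_DRX"              # long sleep duration
```

The MAC CE command mentioned above would simply force the state directly to `"LONG_DRX"` regardless of the timers.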

Uplink grant skipping is used to avoid unnecessary transmissions in the UL whenever the UE has no data to transmit. This reduces UE energy consumption as well as the UL interference introduced to neighboring cells. Such skipping is always used by the UE for configured grants (i.e. “grant‐free scheduling”): as the same grant may be configured for multiple UEs, the collision probability is lowered because UEs without data skip the transmission in such a grant. For dynamic grants, uplink grant skipping is configurable and enables the gNB to do blind scheduling without compromising UE energy consumption. Given that UL grant skipping may be enforced because of configured LCP restrictions, in the case where there is data in the UL buffers and a Periodic BSR is triggered, the BSR is included in a MAC PDU and the grant should not be skipped.

3.5.2.5 Random Access Procedure

Compared to LTE, the NR Random Access (RA) procedure is enhanced to support multi‐beam operation and supplementary uplink (SUL) bands. Furthermore, the Beam Failure Recovery (BFR) procedure (see Section 3.5.2.6) and on‐demand system information requests are added as new use cases for the RA procedure. The RA procedure is triggered by the MAC entity itself, e.g. for UL data arrival, SR failure, or BFR; by the RRC layer, e.g. for initial access or handover; or by the network through a PDCCH order, e.g. for DL data arrival or Secondary Cell (SCell) addition with different UL timing.

Both contention based random access (CBRA) and contention free random access (CFRA) procedures are supported. A CBRA preamble is always associated with a certain SS Block beam configured by the network, whereas a CFRA preamble can be associated with an SS Block beam or with a beam identified using CSI‐RS and dedicatedly configured to the UE. Before each preamble transmission, the UE first selects an SS Block (in case of CBRA) or an SS Block or CSI‐RS (in case of CFRA), based on which the preamble selection is performed. The selection between SS Block and CSI‐RS is based on network‐configured Reference Signal Received Power (RSRP) thresholds measured from the DL beams.

For CBRA, the preamble space from which the UE randomly selects a preamble may be allocated to two different preamble groups, Random Access preamble groups A or B. Group B is used to indicate that the UE has more data in its buffers than a configured threshold, being a rough indication about the UE buffer status. In such a case, the gNB can give the UE a bigger grant already in the RAR. Each SS Block includes preambles from both group A and B in case group B is configured in the cell.

CBRA is a four‐step procedure which involves:

  1. PRACH preamble transmission by the UE, where the UE randomly selects a preamble associated with the selected SSB and the selected RA preamble group (A or B).
  2. RAR transmission by the gNB addressed to the RA‐RNTI (Random Access Radio Network Temporary Identifier), which may include responses to multiple preambles transmitted by multiple UEs in the first step. However, only one response is transmitted per PRACH preamble, regardless of whether the gNB can detect that multiple UEs transmitted the same preamble.
  3. Msg3 transmission in UL by the UE, which is used by the UE to identify itself, i.e. the UE identity, provided either by the RRC or MAC layer, is multiplexed into the MAC PDU of Msg3.
  4. Contention resolution message transmission by the gNB, which identifies the UE by the UE ID provided in Msg3. In case multiple UEs transmitted the same preamble, only the UE with the proper contention resolution ID can proceed after this point; the other UEs must trigger a new preamble transmission.
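In step 2, the RAR is addressed to an RA‐RNTI derived from the time/frequency resource in which the preamble was transmitted. As an illustration, the NR RA‐RNTI formula from 3GPP TS 38.321 can be written as follows (shown here as a sketch; argument names are assumptions):

```python
# RA-RNTI computation for NR, following the structure in TS 38.321:
# RA-RNTI = 1 + s_id + 14*t_id + 14*80*f_id + 14*80*8*ul_carrier_id

def ra_rnti(s_id, t_id, f_id, ul_carrier_id):
    """s_id: index of the first OFDM symbol of the PRACH (0..13);
    t_id: index of the first slot of the PRACH in a system frame (0..79);
    f_id: PRACH occasion index in the frequency domain (0..7);
    ul_carrier_id: 0 for normal UL carrier, 1 for SUL carrier."""
    assert 0 <= s_id < 14 and 0 <= t_id < 80
    assert 0 <= f_id < 8 and ul_carrier_id in (0, 1)
    return 1 + s_id + 14 * t_id + 14 * 80 * f_id + 14 * 80 * 8 * ul_carrier_id
```

Because the RA‐RNTI is fully determined by the PRACH occasion, every UE that transmitted in the same occasion monitors the same RAR, which is why one RAR may carry responses to multiple preambles.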

CFRA is a two‐step procedure which involves:

  1. Pre‐configured PRACH preamble transmission by the UE, where the UE transmits a gNB‐allocated preamble associated with the selected SSB or CSI‐RS.
  2. RAR transmission by the gNB addressed to either the RA‐RNTI or the C‐RNTI (Cell Radio Network Temporary Identifier). Only in the case of BFR may the RAR be addressed directly to the C‐RNTI of the UE, since UL timing alignment is available; otherwise, the UE decodes the RA‐RNTI associated with the PRACH where the preamble was transmitted.

Based on measurement results provided by the UE, the gNB can configure CFRA preambles for only a subset of the beams provided by the cell. This reduces signaling overhead while retaining system flexibility, as the UE need not be allocated a CFRA preamble on beams where it is unlikely to be served. However, e.g. during the handover procedure while the UE is moving, the UE might prefer a beam that has no allocated CFRA preamble. Hence, both CFRA and CBRA preamble transmissions may happen within the same RA procedure. The UE prioritizes the beams with allocated CFRA resources; however, if no such beam is available (dictated by an RSRP threshold), the UE will perform CBRA toward a selected SSB beam.
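The beam selection rule described above (prefer beams with CFRA preambles above the RSRP threshold, otherwise fall back to CBRA) can be sketched as follows; the data structures and names are illustrative assumptions:

```python
# Sketch of RA beam selection: pick a CFRA beam if any exceeds the
# configured RSRP threshold, otherwise fall back to CBRA toward the
# strongest measured beam.

def select_ra_beam(beam_rsrp, cfra_beams, rsrp_threshold):
    """beam_rsrp: dict beam_id -> measured RSRP in dBm;
    cfra_beams: set of beam ids with allocated CFRA preambles;
    returns (chosen beam id, "CFRA" or "CBRA")."""
    cfra_ok = [b for b in cfra_beams
               if beam_rsrp.get(b, float("-inf")) >= rsrp_threshold]
    if cfra_ok:
        # CFRA resources exist on a workable beam: use the best of them.
        best = max(cfra_ok, key=lambda b: beam_rsrp[b])
        return best, "CFRA"
    # No workable CFRA beam: contention-based RA on the strongest beam.
    best = max(beam_rsrp, key=beam_rsrp.get)
    return best, "CBRA"
```

Note that the strongest beam overall may lose to a weaker CFRA beam, which is exactly the prioritization of CFRA resources described in the text.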

In support of SUL operation, when the RA procedure is triggered, the UE first selects the UL carrier (UL or SUL) on which to perform the RA. The selection is based on a DL RSRP threshold measured from the DL signal; in case the DL signal level is below the threshold, SUL is used. This means that the UE cannot switch from SUL to UL, or vice versa, during one RA procedure, but will run the procedure to the end on the initially selected carrier. This principle was adopted to simplify UE operation, as no fresh DL measurements are needed for every preamble transmission attempt.

Furthermore, the RA procedure in NR includes a concept of prioritized random access, where certain UEs may be prioritized in the RA procedure by network configuration. Prioritized RA is only applicable to UEs performing BFR or handover and is realized via a different configuration of the parameters used in the RA procedure. For instance, the UE can be configured to ramp up its UL transmission power more rapidly than other UEs or, in case of overload in the PRACH, to scale down the backoff time indicated in response to the RA preamble transmission.
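The power ramping aspect can be illustrated with a simplified sketch: a prioritized UE is simply configured with a larger ramping step, so its preamble power reaches the maximum faster. Parameter names are assumptions, and spec details such as the preamble-format-specific delta are omitted:

```python
# Sketch of preamble power ramping: each retry adds one ramping step to
# the target received power, and the resulting transmit power (target
# plus estimated pathloss) is capped at the UE maximum power.

def preamble_tx_power(target_rx_power_dbm, pathloss_db, attempt,
                      ramp_step_db, p_cmax_dbm):
    """attempt counts from 1; each retry adds one ramping step."""
    ramped = target_rx_power_dbm + (attempt - 1) * ramp_step_db
    return min(p_cmax_dbm, ramped + pathloss_db)
```

With a step of 4 dB instead of 2 dB, a prioritized UE would reach the same transmit power in roughly half the attempts, which is the intended effect.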

3.5.2.6 Beam Failure Management

Beam failure management is performed in the MAC layer with the support of PHY measurements and involves two procedures: Beam Failure Detection (BFD) and BFR. By means of the BFD and BFR procedures, the serving beams used for communication between the UE and the gNB can be recovered rapidly without involving any upper layer failure procedures, like the radio link failure (RLF) procedure. These procedures complement the beam management procedures discussed in Chapter 5.

BFD is based on beam failure instance (BFI) indications from the PHY, based on which MAC determines when the serving beams are in a failure condition. The UE is configured to monitor certain BFD‐RS(s) (Beam Failure Detection Reference Signals) to evaluate whether the serving beams are workable, based on a hypothetical PDCCH BLER (Block Error Rate). In case all the serving beams are determined to be in a failure condition, PHY provides a BFI indication to the MAC layer. The BFD procedure in MAC is then dictated by a timer and a counter which counts the number of BFI indications received. Every time a BFI indication is received, the timer is restarted, and in case the timer expires, the counter is reset. Hence, the timer duration determines the period within which the beams are considered workable. However, in case the counter reaches a configured threshold value before the timer resets it, beam failure is detected, which triggers the BFR procedure. The BFD operation principle is illustrated in Figure 3.40.

Schematic of beam failure detection principle, depicted by a rightward dashed arrow with 6 short vertical lines and 2 thick arrows labeled Timer TBFD. Solid thick arrows represent beam failure instance indications.

Figure 3.40 Beam failure detection principle.
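The timer and counter interplay of Figure 3.40 can be sketched as follows; class and parameter names are assumptions for illustration:

```python
# Sketch of BFD bookkeeping: each BFI indication restarts a timer and
# increments a counter; if the timer expires the counter resets, and if
# the counter reaches its configured maximum first, beam failure is
# declared and BFR is triggered.

class BeamFailureDetector:
    def __init__(self, timer_ms, max_count):
        self.timer_ms = timer_ms
        self.max_count = max_count
        self.count = 0        # BFI indications seen within the window
        self.timer_left = 0   # remaining time on the BFD timer

    def on_bfi(self):
        """Called when PHY reports a beam failure instance.
        Returns True when beam failure is detected (trigger BFR)."""
        self.count += 1
        self.timer_left = self.timer_ms       # restart the timer
        return self.count >= self.max_count

    def on_tick(self, ms=1):
        if self.timer_left > 0:
            self.timer_left -= ms
            if self.timer_left <= 0:
                self.count = 0                # timer expired: reset counter
```

In other words, only BFI indications arriving densely enough to keep the timer from expiring can accumulate toward the failure threshold.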

BFR is triggered by the detection of a beam failure and is used by the UE to indicate a new candidate serving beam to the serving gNB. The procedure is performed by triggering the Random Access procedure (see Section 3.5.2.5). The network may configure the UE with a set of candidate beams based on SS Block or CSI‐RS beams, which are associated with CFRA preambles. For instance, the network may determine that such beams are the most likely candidates in case a beam failure happens. By means of a CFRA preamble, the new preferred DL beam is indicated, from which the gNB can resume scheduling the UE and apply the beam management procedures in the PHY. As discussed in the previous section on the RA procedure, beams with allocated CFRA preamble resources are prioritized in the RA procedure, and hence the UE prioritizes the configured candidate beams. However, in case no candidate beams are available (dictated by a configured RSRP threshold), fallback to CBRA is enforced and a workable SSB beam is indicated.

The BFR principle is illustrated in Figure 3.41.

Schematic of beam failure recovery principle depicted by a phone, a box containing text, and a brick wall with a small cell icon with serving beams (left) and a small cell icon with candidate beams (right).

Figure 3.41 Beam failure recovery principle.

The BFR procedure is successful in the case that the corresponding RA procedure is successfully completed. Hence, in the case where the RA procedure fails, the BFR procedure also fails, leading to declaration of RLF and enforcement of RRC level recovery.

3.5.3 Radio Link Control (RLC)

The RLC protocol is responsible for segmenting PDCP PDUs to fit within the MAC PDU, minimizing the needed padding, as well as ensuring lossless delivery of the data by means of the Automatic Repeat Request (ARQ) protocol, as defined in [38]. Compared to LTE, NR removed functions from RLC for processing efficiency and moved them to PDCP and MAC: reordering/in‐order delivery is done by PDCP, and the concatenation function is performed along with MAC multiplexing.

Three data transfer modes can be configured for an RLC entity depending on the type of data it serves: Transparent Mode (TM), Unacknowledged Mode (UM) and Acknowledged Mode (AM).

The TM RLC entity is defined for the transmission of data in “one‐shot” like system information broadcast or paging messages through BCCH or PCCH LCHs, respectively, or SRB0 data through the CCCH LCH. According to its name, the TM RLC entity does not modify the data nor does it include any headers when submitting it to the lower layer. TM RLC entity is uni‐directional, i.e. it is configured either to transmit or receive data.

The UM RLC entity is defined for data services like voice or video streaming that do not require lossless delivery, i.e. packets are not acknowledged nor re‐transmitted by the UM RLC entity. The UM RLC entity handles only the segmentation function. The UM RLC entity is uni‐directional, i.e. it is configured either to transmit or receive data.

The AM RLC entity is defined for data services sensitive to any losses in the link, like TCP‐based services. The AM RLC entity provides lossless delivery of upper layer data by means of the ARQ protocol, through status reporting by the receiving entity and re‐transmissions by the transmitting entity. The transmitting entity may also request the receiving entity to transmit a status report, which is called polling. The AM RLC entity is bi‐directional, i.e. it can both receive and transmit data and control information like status reports.

SRBs are always configured with an AM RLC entity, while a DRB may be configured with either an AM RLC entity or one or two UM RLC entities. For instance, a video streaming service may require only a uni‐directional link in the downlink direction and hence only one UM RLC entity, while a voice service requires sound to be both transmitted and received, requiring a bi‐directional link and hence two UM RLC entities for a DRB.

3.5.3.1 Segmentation

When MAC multiplexes data from one or multiple LCHs, the purpose of RLC segmentation is to ensure data can be multiplexed into the MAC PDU even though the grant size is insufficient to carry the whole PDCP PDU. Also, when multiple PDCP PDUs are multiplexed within the MAC PDU, for the remainder of the grant, one of the RLC entities segments a PDCP PDU to maximize transmission of data. This is also an enabler to support the wide range of data rates in the NR system. The receiving RLC entity is responsible for reassembling the RLC SDUs and PDCP PDUs from the received segments based on their sequence number (SN) and segmentation information carried in the RLC header.

Offset‐based segmentation is always used by the RLC entity. That is, whenever a segment is created from an RLC SDU, offset information about the start position of the RLC SDU segment, in bytes within the original RLC SDU, is indicated to the receiving RLC entity in the RLC PDU header. Whenever the first segment of an RLC SDU is transmitted, the offset information is not included, as it would only contain a field of zeros. This requires the RLC PDU header to be able to indicate whether the segment is the first, a middle, or the last segment of the RLC SDU; however, allocating 2 bits for this purpose is justifiable given that it reduces RLC header overhead by 16 bits for every segmented RLC SDU. Furthermore, this enables the RLC PDU header size to be known in advance in the transmitter, without knowledge of the available grant size and regardless of whether segmentation is to be performed, providing better capabilities for pre‐processing in the transmitter (see Section 3.5.3.3).

The segmentation of RLC SDU into RLC PDUs is illustrated in Figure 3.42.

Schematic displaying a rectangle labeled RLC SDU divided into 3 parts with dashed lines connecting to boxes at the bottom representing the first segment (left), middle segment (middle), and last segment (right).

Figure 3.42 RLC SDU segmentation into RLC PDUs.
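The segmentation rules described above (a 2‐bit first/middle/last indication, with the byte offset omitted for the first segment) can be sketched as follows; the field names follow the text, while the exact header bit encoding is omitted:

```python
# Sketch of offset-based RLC segmentation as in Figure 3.42: each
# segment carries segmentation info (SI) and, for all but the first
# segment, a byte offset (SO) into the original RLC SDU.

def segment_sdu(sdu: bytes, sizes):
    """Split sdu into segments with the given payload sizes.
    Returns (si, so, payload) tuples; so is None for the first segment
    (its offset would always be zero), and SI 'FULL' means the whole
    SDU fits without segmentation."""
    if len(sizes) == 1 and sizes[0] >= len(sdu):
        return [("FULL", None, sdu)]
    segs, offset = [], 0
    for size in sizes:
        payload = sdu[offset:offset + size]
        if offset == 0:
            si, so = "FIRST", None          # offset omitted: always zero
        elif offset + len(payload) >= len(sdu):
            si, so = "LAST", offset
        else:
            si, so = "MIDDLE", offset
        segs.append((si, so, payload))
        offset += len(payload)
    return segs
```

The receiver reassembles the SDU by placing each payload at its indicated offset, with the FIRST/LAST markers delimiting the SDU boundaries.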

Since the RLC protocol does not support concatenation, in contrast to LTE, the exposed header overhead increases when many RLC SDUs are multiplexed within one MAC PDU, as each of them forms its own RLC PDU and hence requires its own header with SN, segmentation information, etc. For the UM RLC entity, however, as the SN serves no other purpose than differentiating segments of different RLC SDUs, the overhead is reduced by indicating the SN in the RLC PDU header only for segmented RLC SDUs. Consequently, for full RLC SDUs the RLC PDU header contains only an indication to the receiver that the RLC PDU consists of a complete RLC SDU. By means of this principle, at most one new SN is allocated per RLC entity for each MAC PDU transmission, compared to one SN per RLC SDU. Thus, the SN lengths can be smaller for the UM RLC entity than for the AM RLC entity, and header overhead is further reduced.

3.5.3.2 Error Correction Through ARQ

The ARQ protocol is the second main function of RLC. It ensures lossless delivery of upper layer data by means of polling, status reporting and re‐transmissions. Based on the SN and segmentation information in the RLC PDU header, the receiving RLC entity may track whether RLC SDUs or RLC SDU segments have been lost on the air interface.

Polling is performed by the transmitting RLC entity to trigger status reporting in the receiving RLC entity. By means of configured threshold values (number of transmitted RLC PDUs or bytes) or timer expiry, a poll bit is flagged in the RLC PDU header to ensure sufficiently frequent status reporting so that the transmit window can advance. Notably, unless the transmitter gets feedback on the RLC SDU at the beginning of the transmit window before the window gets full (the window size is half of the SN space), no new data can be transmitted, so as not to confuse the receiver operation. Such window stalling limits the achievable throughput, and hence polling is performed.

Status reporting is performed by the receiving RLC entity to inform the transmitting RLC entity about successfully received and missing RLC SDUs or RLC SDU segments. The missing SDUs or segments are indicated explicitly based on their SNs and offset information in the RLC Status PDU. Successfully received SDUs and segments are, at the same time, indicated implicitly by the absence of their SNs from the Status PDU, by means of the highest status state variable: only the highest SN that is neither indicated as missing nor yet received successfully is indicated explicitly in the Status PDU. With such a structure, all the SNs below the highest status state variable that are not explicitly indicated as missing are implicitly acknowledged to the transmitting RLC entity.
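The implicit acknowledgement rule can be sketched as follows, ignoring SN wrap‐around and segment offsets for simplicity (names are assumptions):

```python
# Sketch of implicit acknowledgement from an RLC Status PDU: every SN
# below the highest status variable (ack_sn) that is not listed as
# missing is treated as delivered.

def acked_sns(ack_sn, nack_sns):
    """ack_sn: highest SN neither missing nor yet received successfully;
    nack_sns: SNs explicitly reported as missing.
    Returns the SNs the transmitter may treat as delivered."""
    missing = set(nack_sns)
    return [sn for sn in range(ack_sn) if sn not in missing]
```

This is why a compact Status PDU listing only the missing SNs (plus one highest SN) is enough to acknowledge an arbitrarily large number of delivered SDUs.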

Re‐transmissions are performed by the transmitting RLC entity based on the received status reports from the receiving RLC entity. Re‐transmissions can be performed on a per full RLC SDU basis or per RLC SDU segment basis, i.e. the amount of re‐transmitted data can be minimized. The maximum number of re‐transmissions can be configured for the RLC entity, and in case this threshold is reached, RLF is declared. The AM RLC entity is therefore considered lossless: it attempts to transmit every byte until it is received successfully or RLF is detected. When RLF is detected, data loss can happen also with the AM RLC entity, as it is re‐established during the RRC re‐establishment procedure. Hence, the error probability is not exactly zero but still very small (10⁻⁸ to 10⁻⁶, depending on the scenario of interest).

3.5.3.3 Reduced RLC Functions for Efficient Processing

As discussed at the beginning of this section, compared to LTE, NR removed the concatenation and re‐ordering/in‐order delivery functions from the RLC entity. The removal of concatenation provides better processing efficiency in the transmitter, and avoiding reordering improves processing on the receiver side.

When concatenation is not performed at RLC, the higher layer functions are decoupled from real‐time constraints in the transmitter. Each PDCP PDU submitted to the transmitting RLC entity can be pre‐processed into an RLC PDU without any information about the available grant size from the MAC layer. Hence, the ARQ protocol in the RLC entity can be decoupled from real‐time processing constraints, as new RLC PDU creation as well as re‐transmission decisions can be made in advance, before the grant reception. Only the possible segmentation needs to be performed in real‐time; however, this only requires re‐encoding the two‐bit segmentation information in the RLC PDU header along with segmenting the data field. That is, the RLC PDU header size never needs to be changed because of segmentation, leading to optimal pre‐processing capabilities.

Avoiding reordering/in‐order delivery at RLC allows immediate delivery of a received RLC SDU to the PDCP entity. As reordering support in the PDCP layer was required from the beginning to support dual connectivity, removing reordering from RLC also avoids duplicating the function in the stack. Nevertheless, the main reason for such a design is to allow stable loading of the PDCP deciphering function, as PDCP PDUs can be deciphered immediately when fully received in the RLC entity, before reordering happens. In LTE, as the reordering is performed by RLC, the PDCP entity receives a burst of PDCP PDUs simultaneously when a missing RLC PDU releases the reordering buffer to PDCP. Such behavior leads to very high processing load peaks in the receiver and reduced energy efficiency; ciphering and de‐ciphering are among the most energy‐hungry functions in the transmitter and receiver.

3.5.4 Packet Data Convergence Protocol (PDCP)

The number of functions at the PDCP layer, defined in [39], has increased in NR compared to LTE. The new set of functions is justified by the new architecture options in the radio network, namely the split between the distributed unit (DU) and the centralized unit (CU), processing efficiency requirements in the UE, the new services of 5G, and the inherent support of dual connectivity. PDCP is now solely in charge of reordering and in‐order delivery, as these functions were removed from RLC as discussed in the previous section. It should be noted that in‐order delivery can also be switched off by configuration if the service in question does not require ordered delivery. Security is one of the main functions of PDCP, which includes ciphering and integrity protection of data. Integrity protection is always applied for SRBs, like in LTE, but can now be configured for DRBs as well. Support of PDCP PDU duplication was introduced to provide more reliable and lower latency communication in support of URLLC services. Other PDCP functions inherited from LTE include: header compression and de‐compression using the Robust Header Compression (RoHC) protocol, routing for split bearers, timer‐based SDU discard at the transmitter, and duplicate discarding at the receiver side.

In dual connectivity operation the PDCP layer is in charge of routing data to different dual connectivity legs which each have their own UM or AM mode RLC entities. Thus, in the MAC BSR, also the PDCP buffer status is conveyed to the schedulers of both involved gNBs. PDCP data duplication can be used to improve reliability by using both dual connectivity links to transmit the same data. This can be especially useful for SRB signaling, e.g. in high mobility scenarios. As the same data is sent over the two links, the receiver may receive the same packet twice, hence, to avoid propagating this to upper layers, PDCP has the capability to discard any duplicates at the receiver.

3.5.4.1 Reordering

As discussed in the RLC section, the reordering and in‐order delivery functions are now performed by the PDCP layer, mainly due to the native support of dual connectivity as well as to enable stable loading of the PDCP deciphering function. The introduction of the NR network architecture option to split between DU and CU also backed up this decision: PDCP PDUs transferred over the F1 interface can be received out‐of‐order in the CU and DU. Reordering in the RLC layer would thus be pointless, as out‐of‐order delivery can already happen within the network on the transmitter side (PDCP PDU transfer from CU to DU in the DL direction).

The reordering window in the receiving PDCP entity is a push‐based window regardless of the RLC mode (AM or UM) associated with the DRB. This means that each PDCP PDU received with an SN outside the window is considered an old PDU and is discarded by the receiving PDCP entity. The window is moved forward by receiving the PDCP PDU with the lowest SN still considered within the window (the next SN after the previously delivered SN of a PDCP SDU) or, alternatively, when the reordering timer expires. Thus, the window is, in a sense, “pushed” forward from the low end. This is different from LTE, where the UM mode RLC entity operates with a pull‐based reordering window in which each SN received outside the window is considered new and becomes the high edge of the window; the low edge is hence “pulled” by the high edge. The NR principle is a bit more complicated for UM mode DRBs, as no more than half of the PDCP SN space shall be in flight at any point in time and there is no RLC level feedback to determine the lowest SN delivered. However, given the large SN spaces specified for PDCP (12 and 18 bits) and the level of simplification the single window enables for the receiver, i.e. PDCP does not need to care about the associated RLC mode, such a principle makes sense.
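The push‐based window behavior can be illustrated with a minimal sketch that discards any SN outside the current window; SN wrap‐around and the full receive‐state bookkeeping are simplified away, and the names are assumptions:

```python
# Sketch of a push-based PDCP reordering window: an SN is accepted only
# if it lies inside [next_expected, next_expected + window_size); any SN
# outside that range is treated as an old PDU and discarded. Receiving
# the lowest in-window SN pushes the window forward from the low end.

class PushWindow:
    def __init__(self, sn_bits):
        self.window_size = 1 << (sn_bits - 1)   # half of the SN space
        self.next_expected = 0                  # low edge of the window

    def receive(self, sn):
        if not (self.next_expected <= sn < self.next_expected + self.window_size):
            return "DISCARD"                    # outside window: old PDU
        if sn == self.next_expected:
            self.next_expected += 1             # push the window forward
        return "ACCEPT"
```

A pull‐based window, by contrast, would treat an out‐of‐window SN as new and move the high edge to it, which is the LTE UM RLC behavior contrasted above.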

In support of the wide range of 5G services, not all services require reordering and in‐order delivery. Consequently, NR allows out‐of‐order delivery to be configured for a DRB, performed by the PDCP entity: any PDCP data PDU received from RLC is delivered immediately to upper layers. It should be noted, however, that the reordering window and receiver variables are updated in the same way as when in‐order delivery is required, enabling reconfiguration between the two modes if necessary.

Additionally, both reordering and in‐order delivery are needed in PDCP to support lossless mobility. This allows PDCP SDUs to be forwarded from the source gNB PDCP layer to the target gNB PDCP layer without knowing exactly which PDCP PDUs the UE has received correctly (i.e. those not yet acknowledged by the RLC layer), at the same time as new upper layer data is transmitted from the core network to the target gNB. The target gNB may obtain the status of the PDCP SDUs correctly received by the UE and continue the transmission immediately from the first PDCP SDU not yet received by the UE. This avoids transmission of unnecessary duplicates which the UE has already received. The receiver in the UE is then able to reorder and deliver all PDCP SDUs to the application layer in order.

For QoS support, timer‐based SDU discard is used in the transmitter to discard data in the transmission buffer and thus avoid transmitting outdated data. Given that any missing SN in the sequence will stall the reordering window in the receiver, the timer‐based SDU discard should mainly be applied to SDUs without an assigned SN. How to achieve such behavior while enabling the best possible throughput is left to UE and network implementations.

3.5.4.2 Security

Ciphering at the PDCP layer is performed for all user plane traffic flows provided by the SDAP layer as well as for RRC signaling. Integrity protection is always applied for SRBs carrying RRC signaling, but for DRBs it can be configured only if the UE supports it. The UE indicates its capability for the aggregated data rate of integrity protected user plane data over all DRBs configured with integrity protection; the lowest possible value for this data rate is 64 kbps. Additionally, as all signaling between the UE and the core network is transmitted inside RRC containers in RRC signaling, the PDCP layer provides ciphering and integrity protection also for those messages.

The PDCP design for ciphering and integrity protection is such that different ciphering and integrity protection algorithms, defined in [40], can be adopted without modifications to the actual PDCP protocol, as long as the supported input parameters are sufficient. The currently supported input parameters are:

  1. – COUNT: A value whose least significant bits are the PDCP SN and which is updated as the PDCP SN increases.
  2. – DIRECTION: Defining the direction of the transmission (uplink or downlink).
  3. – BEARER: The radio bearer identifier in [40]. The RRC provides this value, set to the RB identity minus one.
  4. – KEY: The ciphering and integrity keys for the control plane and for the user plane are KRRCenc, KUPenc, KRRCint, and KUPint.
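
As a small illustration of how these inputs are assembled, the sketch below composes a COUNT from the hyper frame number (HFN) and the PDCP SN, and derives the BEARER input from the RB identity as described above. The function names and the 18‐bit SN configuration are assumptions for illustration.

```python
def pdcp_count(hfn: int, sn: int, sn_bits: int = 18) -> int:
    # COUNT: HFN in the most significant bits,
    # PDCP SN in the least significant sn_bits.
    assert 0 <= sn < (1 << sn_bits)
    return (hfn << sn_bits) | sn

def bearer_input(rb_identity: int) -> int:
    # RRC sets the BEARER input to the RB identity minus one.
    return rb_identity - 1
```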

For both ciphering and integrity protection, the algorithm and the key that the PDCP entity shall use are configured by RRC and are PDCP entity specific. This allows different bearers to utilize different keys and algorithms, enabling Radio Access Network (RAN) architectures where PDCP entities are processed independently in different locations without sharing the same key (see Chapter 4 on the different architecture options).

Integrity protection is based on a standard method of including a MAC‐I field in the PDCP data PDU. The transmitter computes the value of the MAC‐I field before ciphering. At the receiver, the integrity of a PDCP PDU is verified by calculating the X‐MAC based on the input parameters specified above. If the calculated X‐MAC corresponds to the received MAC‐I, integrity protection is verified successfully.
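
The MAC‐I/X‐MAC check can be sketched as follows. Note that the actual NR integrity algorithms (NIA) are defined in [40]; the HMAC‐SHA‐256‐based function here is only a stand‐in to show the verification flow, and all names are illustrative.

```python
import hashlib
import hmac

def compute_mac_i(key: bytes, count: int, bearer: int,
                  direction: int, message: bytes) -> bytes:
    # Stand-in integrity function: mixes the standard inputs
    # (COUNT, BEARER, DIRECTION) with the message and truncates
    # the tag to 32 bits, the size of the MAC-I field.
    inputs = count.to_bytes(4, "big") + bytes([(bearer << 1) | direction]) + message
    return hmac.new(key, inputs, hashlib.sha256).digest()[:4]

def verify_integrity(key, count, bearer, direction, message, mac_i) -> bool:
    # Receiver side: recompute the X-MAC and compare it with
    # the received MAC-I.
    x_mac = compute_mac_i(key, count, bearer, direction, message)
    return hmac.compare_digest(x_mac, mac_i)
```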

Ciphering is applied only to the data part of a PDCP data PDU; an SDAP header or SDAP control PDU included in the PDCP SDU is not ciphered. This was done mainly for two reasons: the ROHC protocol running in the PDCP layer needs to skip over the SDAP header to find the IP header of the PDCP SDU, and leaving the header unciphered also allows implementations where the Quality of Service Flow Identifier (QFI) information is used for improved scheduling decisions in the DU side of the network. The former reason also requires sticking with the fixed size of the SDAP header (1 byte), allowing PDCP to simply skip over the SDAP header part of the given SDU.

The PDCP architecture on ciphering and integrity protection allows processing of the PDCP SDU independently of the actual data transmission time in the radio interface. At the UE side, this allows PDCP PDUs to be pre‐processed in memory waiting for uplink transmission opportunities. In the network this allows PDCP to be in a CU and a cloud‐based implementation.

3.5.4.3 Header Compression

The PDCP performs IP header compression and decompression using the ROHC protocol. Different ROHC profiles are supported by defining a profile identifier for each supported ROHC profile, as shown in Table 3.12. The actual profile definitions, i.e. how to map a data flow to each profile, are not defined by 3GPP; instead, RFC 5795 [41] is referenced. As the ROHC protocol may send independent control packets, called interspersed ROHC feedback, between the ROHC decompressor and the ROHC compressor, PDCP supports dedicated PDCP control PDU types for such packets.

Table 3.12 Supported header compression protocols and profiles.

Profile identifier   Usage            Reference
0x0000               No compression   RFC 5795
0x0001               RTP/UDP/IP       RFC 3095, RFC 4815
0x0002               UDP/IP           RFC 3095, RFC 4815
0x0003               ESP/IP           RFC 3095, RFC 4815
0x0004               IP               RFC 3843, RFC 4815
0x0006               TCP/IP           RFC 6846
0x0101               RTP/UDP/IP       RFC 5225
0x0102               UDP/IP           RFC 5225
0x0103               ESP/IP           RFC 5225
0x0104               IP               RFC 5225

3.5.4.4 Duplication

The PDCP duplication function allows PDCP data PDUs to be duplicated at the transmitting PDCP entity and sent over two associated RLC entities. This feature was introduced mainly to support URLLC‐based services by enabling duplication for DRBs, but it also enhances mobility robustness in high mobility scenarios by duplicating SRB data. The duplication is thus configurable on a per DRB or SRB basis by RRC. Additionally, duplication for DRBs is controlled in MAC by means of a MAC CE, which can activate or deactivate duplication for a DRB configured with duplication.

The RLC entities associated with such a PDCP entity are called the primary and secondary RLC entities. As the naming suggests, the primary RLC entity is always active regardless of the duplication activation status, i.e. the MAC control applies to the activation status of the secondary RLC entity. Hence, PDCP control PDUs that are not duplicated, like the PDCP status report, are always transmitted over the primary RLC entity. The RLC entities can belong either to the same cell group, with Carrier Aggregation (CA) based duplication, or to different cell groups, with dual‐connectivity‐based duplication; PDCP duplication over only one carrier is hence not possible. Whenever the same cell group is used, restrictions in MAC guarantee that the two duplicates never end up on the same carrier. Otherwise, they might fail at the same time if multiplexed into the same MAC PDU, destroying the benefit of duplication.

To avoid unnecessary duplication overhead, when an ACK for a certain PDCP PDU is received from either of the RLC entities, the PDCP discard mechanism is used to discard the duplicated PDU from the buffer of the other RLC entity. This also enables the slower RLC entity to advance its transmission window in case the transmission of the old packets is no longer meaningful.
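
The discard‐on‐first‐ACK behavior can be modeled with a few lines of Python; the class below is a toy sketch with invented names, ignoring RLC segmentation and windows.

```python
class DuplicatingTransmitter:
    """Toy model of PDCP duplication: each PDU is queued on both the
    primary and secondary RLC legs; the first ACK from either leg
    discards the copy still pending on the other."""

    def __init__(self):
        self.pending = {"primary": {}, "secondary": {}}

    def transmit(self, sn, pdu):
        # Duplicate the PDU onto both associated RLC entities.
        self.pending["primary"][sn] = pdu
        self.pending["secondary"][sn] = pdu

    def on_rlc_ack(self, leg, sn):
        # An ACK from either leg makes both remaining copies
        # of that SN unnecessary, so discard them.
        for l in ("primary", "secondary"):
            self.pending[l].pop(sn, None)
```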

3.5.5 Service Data Adaptation Protocol (SDAP)

SDAP serves as an interface between the core network (CN) and RAN and provides the key part of the QoS framework in NR. SDAP handles the mapping between QoS flows from a PDU session and DRBs. Hence, there is one SDAP entity configured for each PDU session and each DRB is only serving one SDAP entity, i.e. data from multiple PDU sessions are not mixed into the same DRB. Another key function of SDAP is the marking of each transmitted packet in both DL and UL direction with a QFI. The marking is used in the DL direction to enforce an implicit mapping of QoS flow to DRB by means of reflective QoS whereas it is used in the UL direction to enable the gNB to replicate the marking on the RAN‐CN interface for QoS enforcement in the CN (see Chapter 6).

The SDAP entity can be configured either with or without the SDAP header, the latter being known as "transparent mode" (TM). TM allows the removal of any SDAP‐introduced overhead, e.g. for DRBs on which the gNB does not plan to apply reflective QoS. Otherwise, the 1‐byte SDAP header is used, which consists of a 6‐bit QFI field and reflective QoS indicators in the DL direction (see Section 3.5.5.1) and a 6‐bit QFI field and a control PDU indicator in the UL direction (see Section 3.5.5.2). The 1‐byte fixed header size is designed to support efficient processing and to ease the PDCP ciphering function, which, as discussed before, does not cipher the SDAP header.
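
Assuming the bit layout described above (two indicator bits followed by the 6‐bit QFI in the DL data PDU header), packing and parsing the 1‐byte header can be sketched as follows; the exact bit positions should be checked against the SDAP specification.

```python
def build_dl_sdap_header(qfi: int, rdi: bool, rqi: bool) -> int:
    # One byte: RDI and RQI indicator bits, then the 6-bit QFI
    # (assumed bit ordering, for illustration only).
    assert 0 <= qfi < 64
    return (int(rdi) << 7) | (int(rqi) << 6) | qfi

def parse_dl_sdap_header(octet: int) -> dict:
    return {
        "rdi": bool((octet >> 7) & 1),
        "rqi": bool((octet >> 6) & 1),
        "qfi": octet & 0x3F,
    }
```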

3.5.5.1 Mapping of QoS Flows to Data Radio Bearer

The mapping between QoS flows and DRBs is controlled by the gNB for both DL and UL directions, whereas the CN configures the UE with IP flow to QoS flow mapping rules as discussed in Chapter 6. Each QoS flow is assigned a QFI and each QFI is assigned a QoS profile by the CN, which is provided to the gNB upon PDU session establishment. The QoS profile defines the required QoS characteristics for a certain QoS flow. Based on the QoS profile, the gNB knows how to treat each individual QoS flow and determines whether and which kind of different DRBs are needed for the PDU session. Finally, the gNB provides the mapping rules to the UE for the QoS flow to DRB mapping in UL direction via explicit RRC signaling or through implicit mapping via reflective QoS.

As not all QoS flows are active at the same time, and as there can be up to 64 QFIs for a given PDU session, immediately configuring all QFI to DRB mapping rules would generate too much signaling overhead and slow down the PDU session setup. For this purpose, the concept of a default DRB is introduced in the UL direction, to which the UE maps all QFIs for which no mapping rule has been provided by the gNB. Thus, each PDU session comes with at least the default DRB. Dedicated DRBs can be established on a need basis when new QFIs emerge in the communication link. Nevertheless, it should be noted that the gNB has the option to configure all mapping rules immediately during the PDU session setup, in which case no default DRB needs to be established.

Reflective Quality of Service (RQoS) allows implicit update of the QFI to DRB mapping rule in the user plane. Whenever a new QFI appears in the DL (or the first DL packet for a given QFI is buffered), the gNB maps the packet onto the desired DRB (an existing one or a newly added one) and transmits it to the UE with the Reflective QoS flow to DRB mapping Indication (RDI) bit flagged. From the flagged RDI bit, the UE determines that the QFI to DRB mapping rule needs to be updated by the given packet and reads the QFI field in the SDAP header. The QFI is then implicitly mapped to the corresponding UL part of the DRB whenever a packet with that QFI is transmitted by the UE. The SDAP header also carries the NAS level RQI (Reflective Quality of Service Indication) bit in DL packets, as the CN does not apply any headers to the packet that is carried over the air. RQoS is especially efficient in mapping new QFIs to existing DRBs, as no explicit RRC signaling is required to configure the UE with new mapping rules.
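
The UE‐side reflective mapping update can be sketched as a small lookup table: unknown QFIs fall back to the default DRB until a DL packet with the RDI bit set teaches a new rule. The class and method names are invented for illustration.

```python
class UlQfiToDrbMap:
    """Toy model of the UE's UL QFI-to-DRB mapping with reflective QoS."""

    def __init__(self, default_drb):
        self.default_drb = default_drb
        self.rules = {}  # qfi -> drb, learned explicitly or reflectively

    def on_dl_packet(self, drb, qfi, rdi):
        # A flagged RDI bit updates the UL mapping rule for this QFI
        # to the DRB on which the DL packet was received.
        if rdi:
            self.rules[qfi] = drb

    def ul_drb_for(self, qfi):
        # QFIs without a rule are mapped to the default DRB.
        return self.rules.get(qfi, self.default_drb)
```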

3.5.5.2 QoS Flow Remapping between Data Radio Bearers

Due to UE mobility or any other reason determined in the network, the gNB may trigger QoS flow remapping. One common scenario is to move a QoS flow from the default DRB to a dedicated DRB. As for any other mobility case, lossless and in‐order delivery should be enforceable for the given QoS flow, and since such remapping involves two DRBs and two PDCP entities, this is performed by the SDAP entity. Because of the additional buffering that in‐order delivery necessitates (data arriving on the new DRB needs to be buffered as long as data of the remapped flow remains in the old DRB), the QoS flow remapping is designed so that the additional buffering requirement applies only to the gNB. Hence, in the DL direction it is left to network implementation to ensure that the old DRB is "emptied" of the packets of the remapped QoS flow before they are transmitted on the new DRB. In the UL, after the last packet of the remapped QoS flow on the old DRB, the UE transmits an SDAP control PDU with the control bit flagged and the QFI, which serves as an end marker. Based on the end marker, the gNB can release the buffered packets of the remapped QoS flow to upper layers, maintaining in‐order delivery.

3.5.6 Radio Resource Control (RRC)

For an efficient radio communication system, controlling the available radio resources, maintaining radio connections, and allocating available resources to connections based on end user QoS requirements are vital operations. In NR, the RRC protocol, defined in [42], provides the necessary configuration signaling and procedures for these operations. RRC has the following functions:

  1. – Broadcast of system information,
  2. – RRC connection control,
  3. – Measurement configuration and reporting,
  4. – Inter‐radio access technology (RAT) mobility,
  5. – Set of other functions including transfer of dedicated NAS and UE radio access capability information.

The RRC protocol defines these functions and the corresponding signaling procedures, including UE actions. However, the protocol defines only a limited set of procedures for the network, i.e. how the gNB shall operate. The majority of gNB operations, i.e. how and when the network utilizes the different signaling procedures, is left to the RRM function of the radio network, which is implementation dependent.

Through the system information broadcast, RRM controls the transmission and the transmitted parameters of the MIB, SIB type 1 (SIB1), and the other SIB types. The MIB is transmitted on the PBCH of the SS block, and SIB1 as part of the RMSI, as discussed in Section 3.3.2. The RRC protocol defines the UE actions and procedures for acquiring the MIB, SIB1, and other SIBs from downlink broadcast messages. In addition to acquiring system information from broadcast messages, NR also supports a UE initiated system information request procedure for the other SIB types.

This system information acquisition process is part of the cell selection and cell reselection process needed for Idle mode mobility, see Section 5.5. Based on the received MIB, the UE can continue with reception of OSI to detect parameters or to decide whether a detected cell is suitable. The UE determines the cell quality based on RSRP measurements and parameters provided in SIB1 and other SIBs. In case the UE has detected multiple cells fulfilling the cell quality requirements, the UE ranks all detected cells based on the cell ranking criteria: it orders all suitable cells based on parameters provided in the SIBs and selects the highest ranked cell. The RRM of the network can control both the cell quality criteria and the cell ranking via cell specific offsets and priorities, so that the UE reselects the most appropriate cell for Idle mode camping. The RRM therefore has sophisticated means to control Idle mode mobility of the UE through appropriate parameter settings in the SIBs. In addition, through the system information broadcast the network can configure a cell to be barred, or control or forbid access for a set of subscribers or a set of service types.

The RRC connection control defines UE actions when the network is controlling the radio connection between UE and gNB. It defines how the network can contact the UE in Idle mode or RRC Inactive state by paging, see Section 5.5.2, and controls the RRC connection establishment procedure and initial security activation.

The RRC connection reconfiguration procedure is used to configure the PHY functions, and all radio protocols for active signaling and/or data transfer between UE and gNB. In addition, it is used for RRC state management between RRC CONNECTED and RRC INACTIVE state. The RRC state management principles are discussed in Section 5.5.2.

In RRC CONNECTED state, the UE has SRBs established for dedicated RRC signaling, utilizing the error free delivery provided by AM RLC. Additionally, DRBs are configured based on service needs to transfer user plane data. For both SRBs and DRBs the used configuration is signaled using the RRC protocol. In RRC CONNECTED state a radio link exists between the UE and a gNB, or multiple gNBs in the case of dual connectivity, and therefore the PHY features supporting efficient data transmission on PDSCH and PUSCH are enabled. The network can configure the UE via RRC signaling toward a preferred PHY configuration, including the channel state information (CSI) measurement configuration and reporting, as well as the preferred transmission and reception configuration.

The configuration of PHY features and transmission modes needs to be in accordance with the UE capabilities, as the network shall avoid configurations that are not supported by the UE. The network learns the UE capabilities from the UE capability transfer procedure supported by RRC. In addition, the RRC protocol supports storing UE capabilities in the network, avoiding the UE capability transfer procedure at every RRC IDLE to RRC CONNECTED state transition.

In RRC CONNECTED state, the connection control maintains the radio link by using network controlled intra system handovers and the beam management functions performed by MAC. To perform these functions, RRM needs a set of measurement reports. The RRC measurement configuration and reporting function is used to configure the UE measurement objects, the measurement quantities, and the reporting conditions in both the MAC and RRC layers of the UE. The RRC connection control also defines the criteria for when the UE considers the radio link failed, and the UE actions for radio link recovery.

The RRC protocol is also used to control mobility to other RATs such as LTE. The radio network's RRM can configure appropriate measurements and reporting criteria in the UE regarding another RAT, e.g. LTE or UMTS. The UE then performs measurements on the other RAT and reports based on the reporting criteria. Based on the received measurement reports, RRM can initiate a handover toward the target RAT using RRC. The inter‐RAT handover message can include the physical and radio protocol layer configuration of the target RAT for the UE. Similarly, when the gNB receives a request for an incoming handover from another RAT, RRC signaling is used to provide the radio configuration to be used in NR at handover from the other RAT. The other RAT forwards the received radio configuration parameters as a transparent container to the UE in the handover command.

Finally, the RRC protocol contains a set of other functions including transfer of dedicated NAS information. For dedicated NAS information messages, the RRC signaling defines transparent containers, so that NAS messages are not processed by the RRC protocol. This provides robust means to transfer NAS messages in sequence and avoids race conditions between NAS and RRC state machines in mobility and state functions.

The RRC signaling is implemented by using Abstract Syntax Notation Number 1 (ASN.1) encoding, supporting flexible extendable message definitions. It can be expected that the RRC protocol will be extended by many new parameters and procedures in coming NR releases. These new parameters and procedures are needed to allow configuration of new PHY or radio protocol features, as well as to extend the capabilities of the NR radio system and RRM of the radio network.

3.6 Mobile Broadband

3.6.1 Introduction

Broadband data communication was originally available via wired connections utilizing Ethernet and ADSL during the 1990s and the early years of the twenty‐first century. The first steps of wireless broadband communication were taken with the WLAN radio generations, which were simply meant to adapt LAN communication to wireless connections. The most dominant solutions were based on IEEE 802.11b and IEEE 802.11g, also known as Wi‐Fi, used to detach computers from the cable connection and allowing limited mobility to the user. At the same time, people started to use their first smartphones to read emails and synchronize calendars over cellular connections such as GPRS and later UMTS. Wi‐Fi provided better broadband type data connections in terms of bitrate and latency and allowed the use of normal web browsers, but was hardly mobile, as WLAN connectivity was only available in limited areas like homes, offices, or airports (still today, most of the data traffic worldwide goes via WLAN). Moreover, several manual steps are needed when entering a new Wi‐Fi network.

Smartphones had limited processing capabilities, and their screens were small with low resolution. The available cellular data connections provided modest data rates and long latencies; thus those connections were hardly broadband, even though they provided wide area coverage and supported mobile users. The user experience and device (laptop and smartphone) usage were quite different compared to wired connections. Different service providers developed web pages optimized for mobile phones, or services were provided over the i‐mode or WAP protocols.

During that time, it was apparent that these different environments would merge; however, it was unclear when this would happen and which technology components would enable the change. From a communication technology point of view, the following evolution steps can be identified:

  1. – Introduction of HSPA, i.e. 3GPP Release 5 in 2003 (see [43]) and Release 6 in 2006 (see [44]) and compatible cellular networks and mobile devices.
  2. – Introduction of IEEE 802.11n in 2009 (see [45]) and Wi‐Fi Alliance certified devices and access points (APs).
  3. – Introduction of LTE Release 8 in 2008 (see [46]) and LTE Release 10 for the ITU‐R 4G submission in 2011 (see [47]) and compatible products.

From the devices perspective, significant changes were:

  1. – Introduction of the first iPhone in 2007 and iPhone 3G in 2008. The iPhone 3G supported 3G HSPA and the App Store for downloading third party applications.
  2. – Introduction of the Android 1.5 (Cupcake) operating system for mobile devices in 2009.
  3. – Introduction of HTML‐5 during 2014.

From the device perspective, one should not forget other significant technology improvements, for example in processor and camera technologies. This development has brought about the current MBB and the services we know so well today, where the main data traffic is due to video downloads and picture uploads to social media such as Facebook and Instagram. The popularity of these services is highly dependent on the availability of affordable mobile devices with sufficient memory, display, camera, and processing capability, and on always being connected with a high‐bandwidth, low latency cellular connection.

For the future, it can be anticipated that average traffic consumption will keep increasing while depending less and less on the user's location. Currently, the majority of video is consumed inside buildings, which may seem self‐evident. However, one can anticipate, and already see, that habits are changing toward video consumption depending more on available time than on the location of the user. Therefore, data consumption has increased and will increase when people are traveling in cars, buses, trains, and even planes, requiring more efficient solutions to avoid a high number of high‐speed UEs each individually connecting to cellular base stations.

3.6.2 Indoor Solutions

During the past decade, the de facto indoor wireless solution has been IEEE 802.11n based WLAN, known as Wi‐Fi (see [18]). The previous generations of 802.11 are also used by legacy devices, but the industry has now shifted its focus to IEEE 802.11ac with even higher data rates. Development of IEEE 802.11 technologies is ongoing with IEEE 802.11ax and other standard amendments. During the 5G era, the IEEE 802.11 family is expected to remain a vital component of indoor solutions. The 802.11 technology is supported by an extremely high number of different device types, such as TV equipment, game consoles, printers, etc., operating in the 2.4 GHz Industrial Scientific and Medical (ISM) band and increasingly also in the 5 GHz band, as discussed in Chapter 2. These bands are very attractive for deploying privately owned indoor hot spots and networks.

For future solutions, enabling cost efficient NR BS indoor deployments would provide significant benefits. This can be easily understood by looking at Figure 3.43, which illustrates the wall penetration loss caused by different materials as a function of carrier frequency. The result is based on 3GPP channel model studies and was agreed for simulation modeling, without taking any attenuation modeling inaccuracy terms into account (see [48]). The inaccuracy is modeled as a normal distribution with zero mean and a 4–5 dB standard deviation.


Figure 3.43 Wall penetration loss for O2I below 6GHz.

Figure 3.43 shows that modern infrared reflective, i.e. selective, windows introduce a very high 25–29 dB attenuation for all radio signals almost independently of the frequency. The concrete wall attenuation increases linearly from 12 to 34 dB when the frequency changes from ∼500 MHz to 6 GHz. Standard multi‐panel glass and wood have relatively flat attenuation over frequency, but still a significant 5–7.5 dB attenuation for glass and 9–10.5 dB for wood.

Even though buildings are not constructed of a single material, but are rather a combination of glass, concrete, and wooden parts, it is apparent that providing a high quality signal from outside to inside buildings using concrete and infrared reflective glass is difficult even at relatively low frequencies, i.e. frequencies around 2 GHz. Naturally, such attenuation reduces the maximum available coverage and reliability of the network even for low data rate services such as voice. Operators will continue to provide indoor coverage as today by utilizing frequencies below 1 GHz and frequencies around 2 GHz, and with higher frequencies when possible, but the available bandwidth on the low frequencies is limited to 5–20 MHz per band and operator. In some limited cases a single operator may have up to 40 MHz of low band spectrum available. Only at the 3.5 GHz frequency will operators have the possibility of 80–100 MHz bandwidth or more on a single band, but there the penetration loss can be an additional 10 dB or more, as shown in Figure 3.43.
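
The material losses discussed above follow a simple linear‐in‐frequency model in the 3GPP channel model studies. The sketch below uses coefficients of the form L = a + b·f (f in GHz); the coefficient values are assumptions in the style of the 3GPP channel model and should be verified against [48] before use.

```python
# Material penetration loss L = a + b * f_GHz (in dB).
# The (a, b) coefficients below are assumed illustrative values
# following the 3GPP channel model style; verify against [48].
MATERIAL_LOSS = {
    "standard multi-pane glass": (2.0, 0.2),
    "IRR glass": (23.0, 0.3),
    "concrete": (5.0, 4.0),
    "wood": (4.85, 0.12),
}

def penetration_loss_db(material: str, f_ghz: float) -> float:
    a, b = MATERIAL_LOSS[material]
    return a + b * f_ghz
```

With these coefficients, concrete loss grows steeply with frequency while IRR glass stays high but nearly flat, matching the qualitative behavior of Figure 3.43.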

When considering mmWave frequencies, the situation is even more problematic, as shown in Figure 3.44, which presents the wall attenuation on a logarithmic scale (see [48]). It is apparent that at mmWave frequencies, serving indoor users from outside is not really feasible. Even if the 5G base station were able to utilize high transmission powers, the uplink would be very limited due to the end user device's maximum TX power. Additionally, even if the uplink connection could be maintained, the device power consumption would be significantly higher when communicating with the outside compared to communicating with 5G base stations or Wi‐Fi access points located inside the building.


Figure 3.44 Wall penetration loss for O2I at mmWave frequencies.

The wall attenuation leads to other factors that need to be noted. The first is that when base stations are located inside, the interference caused from the inside to the outside system, or vice versa, is greatly reduced, making cells more isolated and thus less affected by interference from other cells. Secondly, different parts of an indoor network can also experience higher isolation: even if the interior walls are not concrete, interior floors and load‐bearing walls will still attenuate signals significantly. Therefore, when utilizing higher frequencies, more 5G base stations are required to cover single, separated floors, while the base stations interfere with each other less than at lower frequencies.

To be able to install indoor base stations, one needs to consider at least following points:

  1. – What is the available wired backhauling to the building?
  2. – How can the base‐stations be installed?
  3. – How to provide access and authorization to the network?

When the building has an existing fiber connection and internal Ethernet cabling, the first requirement is easily solved, as multiple 5G base stations or Wi‐Fi access points can be installed in the different locations where Ethernet is available, or different base stations can share the fiber connection. However, this can be a rare situation in many cities with older houses and even historical city centers. Existing twisted pair copper lines originally installed for PSTN telephony easily become a limiting factor, and replacing the cabling to the building with fiber becomes too expensive. In such cases, fixed wireless type solutions, as discussed in Section 3.1.2 as the use case of 5G‐TF, become of interest.

Notably, the solution can utilize 5G technology, including both below 6 GHz and mmWave frequencies, to provide the connection from 5G base stations to separate CPEs. CPEs can have external antennas mounted on the outside walls of the building, preferably with LOS conditions to the base station(s). Using both below 6 GHz and mmWave frequencies allows the mmWave frequencies to serve those CPEs in LOS condition, and e.g. bands n77, n78, or n79 (see Chapter 2) to serve those CPEs that are in NLOS or momentarily blocked by moving obstacles. Such a combination of different frequencies has been shown to provide high system capacity, as the mmWave frequency is used opportunistically for high quality LOS links, leaving more bandwidth on the lower frequency bands (n77, n78, or n79) available for devices in NLOS conditions, while avoiding the need to allocate extensive mmWave frequency resources to these users.

A short cable through the wall or window is used to connect the external unit to the indoor CPE, which can then operate as a Wi‐Fi AP or 5G gNB providing a connection to the building's internal Ethernet network. In cases where internal cabling is not available, repeaters or perhaps self‐backhauling solutions are needed to achieve full coverage in any large home or office building with concrete walls or floors.

The second question becomes relevant when considering how installation and maintenance can be done in individual houses and small offices. Today, telecom operators own the base station infrastructure, including the antennas, cabling, and the physical base station cabinet. Additionally, the preferred location of a base station is a public location where the operator has its own secured access. Alternatively, base stations are installed in basements or other technical spaces of an apartment block, with cabling to the roof of the house. In such cases, an operator needs leasing deals with the property owners.

However, it can be anticipated that such base station installation models are not economically feasible for small offices and individual houses, and new complementary business models are needed for these installations. Additionally, ease of installation, configuration, and operation needs to be designed for consumers rather than for telecommunication operators with trained installation and maintenance personnel. Naturally, a consumer product aimed at serving a small number of users must have significantly lower equipment price, energy consumption, and installation cost and complexity than an operator‐installed macro base station serving hundreds or even thousands of users.

The third question is who can control access to the BS installed in an individual home or office. Preferably, access and authorization in operator‐provided networks would be based on existing cellular network authentication solutions utilizing the universal subscriber identity module (USIM). Similarly, the USIM can be used for operator‐hosted WLAN, but when the network provider is the property owner this typically becomes much more cumbersome. There may be users with subscriptions from different operators, so solutions should support multiple operators. Additionally, home or property owners will have their own preferences on how to control network access and how to grant access to the local area network for local services, such as printers, media servers, etc.

Today, cellular standard‐based home or office networks hardly exist, and it remains to be seen whether they will become available in the early 5G time frame. In the Wi‐Fi solution space, the most common method is to provide the network ID and password manually, e.g. to hotel customers at check‐in, or to configure the Wi‐Fi access profile on the device for office access. Providing passwords manually is cumbersome and does not really provide additional security, as passwords are not individual and updates are infrequent. In many public places, such as shopping malls and airports, the solution is simply to allow limited network access without authentication, after accepting the terms of use.

It is clear that commonly used Wi‐Fi authentication solutions will not be sufficient in the future. The requirement for easy installation, configuration, and use is not only a technical but also an economic imperative, and the related legal responsibilities need to be addressed in the marketplace.

However, once these issues are addressed, 5G, with its extremely wide channel bandwidths, high data rates, and low radio interface latency, can clearly be revolutionary in providing MBB connectivity indoors, nicely coupled to mobile outdoor solutions, so that the MBB service does not depend on place or mobility.

3.6.3 Outdoor‐Urban Areas

For urban and dense urban areas, it is evident that initial 5G deployments will reuse existing cell sites currently utilized for LTE and 3G as well as the remaining 2G deployments. The utilization of these sites can be quite straightforward; however, limitations may arise in terms of practical installation of antennas, sufficient backhaul capacity, etc.

For initial deployments in urban and dense urban areas, operators have two alternative routes, depending on their overall technology strategy and investment timeline. The first option is to utilize the existing LTE network as the coverage and connectivity layer for 5G connections and couple new higher frequency bands, mainly n77, n78, or n79, with this LTE coverage layer. This solution relies on the dual‐connectivity (DC) architecture between LTE and NR, where LTE operates as the master of the connection and NR is used as a capacity booster when available. The benefit is that full coverage is not needed for an initial NR deployment, and typically there is no need to refarm spectrum resources from earlier systems to NR. The DC solution allows data connections to continue without interruption on the LTE side when NR coverage is lost, without inter‐system handovers. In addition, the operator may also utilize high mmWave frequencies, bands n257, n258, n260, or n261, in dense urban deployments or for fixed wireless services together with LTE.

The obvious drawback of this approach is that when the 5G connection is lost, the service is downgraded to LTE service quality (lower data rates and higher latency). Additionally, the new core network architecture and features are not available, and system operation is based on the existing LTE network. Therefore, this option is seen as a transition solution toward standalone NR operation with an NR radio network and new core network infrastructure.

The second approach is to deploy a standalone NR solution. Such deployments in urban and dense urban areas would benefit from initial usage of sub 1 GHz frequencies to obtain full coverage, including indoor coverage, by using existing cell sites. As the number of NR‐capable devices will initially be small compared to LTE devices, operators need to carefully balance sub 1 GHz spectrum usage between LTE and 5G. In addition to the sub 1 GHz deployment, additional deployments on n77, n78, or n79, similarly to the first approach, are needed to achieve data rates that differentiate 5G from LTE and earlier generations. The benefit of this approach is the possibility to utilize the new core network architecture and services immediately at 5G launch. In addition, utilizing low frequencies from the start provides more unique NR coverage for the operator in urban areas, and no later transition to standalone 5G operation is needed. The drawbacks are the required spectrum balancing between LTE and NR and the number of sites that need to be upgraded to NR in the initial phase, as unnecessarily frequent mobility between different RATs has a negative impact on the end‐user service experience.

Notably, in both strategies the bands n77, n78, and n79 are vital for delivering the promise of eMBB in NR. These bands each contain 500 MHz or more of spectrum and can provide a contiguous system bandwidth of 100 MHz in an operator's network. Such an amount of spectrum is a significant boost to any operator's frequency assets, and it also matches nicely with the first NR UE capabilities supporting 100 MHz RX bandwidth for eMBB on bands below 6 GHz.
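To illustrate what a single 100 MHz carrier buys, the sketch below applies the approximate peak data rate formula of 3GPP TS 38.306. The parameter choices (4 MIMO layers, 256QAM, 14% downlink overhead, 30 kHz subcarrier spacing with 273 resource blocks filling 100 MHz) are illustrative assumptions, not a capability claim for any specific device.

```python
# Rough peak-rate estimate for one 100 MHz NR carrier, following the
# approximate throughput formula in 3GPP TS 38.306. Parameter values
# below are illustrative assumptions.

def nr_peak_rate_bps(layers=4, q_m=8, n_prb=273, mu=1,
                     r_max=948 / 1024, overhead=0.14, scaling=1.0):
    """Approximate peak data rate in bit/s for one carrier.
    mu=1 is 30 kHz subcarrier spacing; n_prb=273 fills 100 MHz."""
    symbols_per_second = 14 * (2 ** mu) * 1000   # 14 OFDM symbols/slot
    re_per_second = n_prb * 12 * symbols_per_second  # 12 subcarriers/PRB
    return layers * q_m * scaling * r_max * re_per_second * (1 - overhead)

rate = nr_peak_rate_bps()
print(f"{rate / 1e9:.2f} Gbps")  # roughly 2.3 Gbps
```

Even with these moderate assumptions, one 100 MHz carrier yields a peak rate in the gigabit range, which is why these bands differentiate 5G eMBB from LTE carriers of at most 20 MHz.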

Especially in a dense urban environment, the existing sites may already provide full coverage by utilizing the n77, n78, and n79 5G bands alone, without a sub 1 GHz coverage layer. This can be achieved, especially in outdoor and less attenuated indoor locations, in cities with small ISDs. The NR standard provides several tools for improving NR coverage compared to LTE. These solutions, such as downlink control channel beamforming utilizing mMIMO, are band agnostic and can play an interesting role on bands n77, n78, and n79. Additionally, the concept of SUL was introduced by utilizing the carrier aggregation framework. In this case, the connection can use bands n80–n84 or n86, located below 1 GHz or below 2 GHz, for uplink transmission while maintaining downlink transmission on higher wideband carriers such as n77. This solution can relieve uplink coverage problems; the uplink is often the limiting factor due to the UE's lower transmission power. However, as the uplink bandwidth on these SUL bands is limited, SUL is expected to be a short‐term solution and not widely used. A more straightforward solution is to aggregate the higher frequency band carriers with an available lower frequency carrier and gain the additional benefits of the two carriers also for the downlink.
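The SUL selection principle can be sketched as a threshold rule: the UE transmits on the low‐band supplementary uplink when downlink measurements indicate it is beyond the uplink coverage of the high‐band carrier. In NR the criterion is an RSRP threshold broadcast by the network; the specific threshold value and the function names below are assumptions for illustration.

```python
# Illustrative sketch of SUL carrier selection. The -110 dBm threshold
# is an assumed example value; in NR the network broadcasts the actual
# RSRP threshold used for this decision.

def select_uplink_carrier(ssb_rsrp_dbm: float,
                          sul_threshold_dbm: float = -110.0) -> str:
    """Return 'NUL' (normal uplink on the high band, e.g. n78) when the
    measured cell is strong enough, otherwise 'SUL' (low band,
    e.g. n80-n84 or n86)."""
    if ssb_rsrp_dbm >= sul_threshold_dbm:
        return "NUL"
    return "SUL"

# A cell-edge UE measuring -118 dBm moves its uplink to the low band
# while keeping the downlink on the wideband carrier.
print(select_uplink_carrier(-95.0))   # NUL
print(select_uplink_carrier(-118.0))  # SUL
```

The asymmetry works because the base station's higher transmit power sustains the downlink on the high band well beyond the point where the UE's uplink budget is exhausted.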

References

All 3GPP specifications can be found under http://www.3gpp.org/ftp/Specs/latest. The acronym “TS” stands for Technical Specification, “TR” for Technical Report.

  1. 3GPP TS 25.306: “UE Radio Access capabilities”.
  2. 3GPP TS 25.993: “Typical examples of Radio Access Bearers (RABs) and Radio Bearers (RBs) supported by Universal Terrestrial Radio Access (UTRA)”.
  3. Mogensen, P., Pajukoski, K., Tiirola, E., et al.: “Centimeter‐wave concept for 5G ultra‐dense small cells”, IEEE VTC2014.
  4. Levanen, T., Pirskanen, J., Koskela, T., et al.: “Low Latency Radio Interface for 5G Flexible TDD Local Area Communications”, IEEE ICC2014.
  5. 5G Test Network Finland (5GTNF): http://5gtnf.fi/overview.
  6. Parkvall, S., Furuskog, J., Kishiyama, Y., et al.: “A trial system for 5G wireless access”, IEEE VTC2015.
  7. 5G‐SIG (Special Interest Group), “KT will provide world's first 5G experience”, https://corp.kt.com/eng/html/biz/services/sig.html.
  8. Verizon 5G Technical Forum: http://www.5gtf.net.
  9. KT PyeongChang 5G Special Interest Group, TS 5G.300: “KT 5th Generation Radio Access; Overall Description”.
  10. Vakilian, V., Wild, T., Schaich, F., et al.: “Universal‐filtered multi‐carrier technique for wireless systems beyond LTE”, 2013 IEEE Globecom Workshops (GC Wkshps), Atlanta, GA, 2013, pp. 223–228.
  11. Ahmed, R., Wild, T., Schaich, F., et al.: “Coexistence of UF‐OFDM and CP‐OFDM”, IEEE 83rd Vehicular Technology Conference (VTC Spring), 2016, pp. 1–5.
  12. Zhang, X., Jia, M., Chen, L., et al.: “Filtered‐OFDM – Enabler for flexible waveform in the 5th generation cellular networks”, 2015 IEEE Global Communications Conference (GLOBECOM), 2015, pp. 1–6.
  13. 3GPP TR 38.802: “Study on New Radio Access Technology Physical Layer Aspects”.
  14. Levanen, T., Pirskanen, J., Pajukoski, K., et al.: “Transparent Tx and Rx Waveform Processing for 5G New Radio Mobile Communications”, accepted to IEEE Wireless Communications Magazine.
  15. 3GPP R1‐1609568: “Out of band emissions, in‐band emissions and EVM requirement considerations for NR”, Nokia, Alcatel‐Lucent Shanghai Bell, 3GPP TSG‐RAN WG1 #86bis, Lisbon, Portugal, October 10–14, 2016.
  16. 3GPP R1‐1610113: “Coverage analysis of DFT‐s‐OFDM and OFDM with low PAPR”, Qualcomm Incorporated, October 10–14, 2016.
  17. 3GPP TS 38.104: “NR; Base Station (BS) radio transmission and reception”.
  18. IEEE Std 802.11‐2016, IEEE Standard for Information technology: “Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications”.
  19. 3GPP TS 38.211: “NR; Physical channels and modulation”.
  20. Mohsenin, T., Truong, D.N., Baas, B.M. (2010). A low‐complexity message‐passing algorithm for reduced routing congestion in LDPC decoders. IEEE Transactions on Circuits and Systems I 57 (5): 1048–1061.
  21. Zhang, Z., Anantharam, V., Wainwright, M.J. et al. (2010). An efficient 10GBASE‐T Ethernet LDPC decoder design with low error floors. IEEE Journal of Solid‐State Circuits 45 (4): 843–855.
  22. Mohsenin, T., Shirani‐mehr, H., Baas, B.M. (2013). LDPC decoder with an adaptive word width data path for energy and BER co‐optimization. VLSI Design 2013 (1): 1–14.
  23. Li, M., Naessens, F., Debacker, P., et al.: “An area and energy efficient half row‐paralleled layer LDPC decoder for the 802.11ad standard”, in Proc. IEEE Workshop on Signal Processing Systems (SiPS'13), Taipei City, pp. 112–117, Oct. 2013.
  24. Yin, B., Wu, M., Wang, G., et al.: “A 3.8 Gb/s large‐scale MIMO detector for 3GPP LTE‐Advanced”, in Proc. IEEE ICASSP, Florence, Italy, May 2014.
  25. Li, A., Xiang, L., Chen, T. et al. (2016). VLSI implementation of fully‐parallel LTE turbo decoders. IEEE Access 4: 323–346.
  26. Dizdar, O. and Arıkan, E. (2014). A high‐throughput energy‐efficient implementation of successive‐cancellation decoder for polar codes using combinational logic. IEEE Transactions on Circuits and Systems I: Regular Papers, vol. abs/1412.3829.
  27. Park, Y.S., Tao, Y., Sun, S., et al.: “A 4.68Gb/s belief propagation polar decoder with bit‐splitting register file”, in Symp. on VLSI Circuits Digest of Technical Papers, pp. 1–2, Jun. 2014.
  28. Giard, P., Sarkis, G., Thibeault, C., et al.: “Unrolled polar decoders, Part I: Hardware Architectures”, May 2015 [online]. Available at http://arxiv.org/pdf/1505.01459.pdf.
  29. Lee, X.R., Chen, C.L., Chang, H.C., et al.: “A 7.92 Gb/s 437.2 mW stochastic LDPC decoder chip for IEEE 802.15.3c applications”, IEEE Transactions on Circuits and Systems I: Regular Papers 62.2 (2015): 507–516.
  30. Weiner, M., Blagojevic, M., Skotnikov, S., et al.: “27.7 A scalable 1.5‐to‐6Gb/s 6.2‐to‐38.1 mW LDPC decoder for 60GHz wireless networks in 28nm UTBB FDSOI”, 2014 IEEE International Solid‐State Circuits Conference Digest of Technical Papers (ISSCC), 2014.
  31. Ajaz, S. and Lee, H.: “Multi‐Gb/s multi‐mode LDPC decoder architecture for IEEE 802.11ad standard”, IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), pp. 153–156, Nov. 2014.
  32. Zhang, K., Huang, X., Wang, Z.: “A high‐throughput LDPC decoder architecture with rate compatibility”, IEEE Transactions on Circuits and Systems I: Regular Papers 58.4 (2011): 839–847.
  33. Belfanti, S., Roth, C., Gautschi, M., et al.: “A 1Gbps LTE‐Advanced turbo‐decoder ASIC in 65nm CMOS”, IEEE Symposium on VLSI Circuits (VLSIC), 2013.
  34. Roth, C., Belfanti, S., Benkeser, C. et al. (Jun. 2014). Efficient parallel turbo‐decoding for high‐throughput wireless systems. IEEE Transactions on Circuits and Systems I: Regular Papers 61 (6): 1824–1835.
  35. 3GPP TS 38.212: “NR; Multiplexing and channel coding”.
  36. 3GPP TS 38.300: “NR; NR and NG‐RAN Overall Description; Stage 2”.
  37. 3GPP TS 38.321: “NR; Medium Access Control (MAC) protocol specification”.
  38. 3GPP TS 38.322: “NR; Radio Link Control (RLC) protocol specification”.
  39. 3GPP TS 38.323: “NR; Packet Data Convergence Protocol (PDCP) specification”.
  40. 3GPP TS 33.501: “Security architecture and procedures for 5G system”.
  41. IETF RFC 5795: “The RObust Header Compression (ROHC) Framework”.
  42. 3GPP TS 38.331: “NR; Radio Resource Control (RRC) protocol specification”.
  43. Overview of 3GPP Release 5, Summary of all Release 5 Features. ETSI Mobile Competence Centre, Version 9 September 2003.
  44. Overview of 3GPP Release 6, Summary of all Release 6 Features, Version TSG #33. ETSI Mobile Competence Centre, 2006.
  45. IEEE 802.11‐09/0991r0, TGn Closing Report, Bruce Kraemer, Marvell, 2009.
  46. Overview of 3GPP Release 8. ETSI Mobile Competence Centre, 2008.
  47. Overview of 3GPP Release 10. ETSI Mobile Competence Centre, 2011.
  48. 3GPP TR 38.901: “Study on channel model for frequencies from 0.5 to 100 GHz”.