Tracing LTE Development: Evolution of packet switched services in mobile networks

Alex Wanda
To understand LTE it is necessary to look back at its predecessors and follow the evolution of packet switched services in mobile networks.

After the year 2000, the first stage of the General Packet Radio Service (GPRS), often referred to as the 2.5G network, was deployed in live networks.


GPRS offered a model of how radio resources (in this case, GSM time slots) that were not being used by Circuit Switched (CS) voice calls could be used for data transmission and, hence, how the profitability of the network could be enhanced.



In contrast to the GSM CS calls, which had a Dedicated Traffic Channel (DTCH) assigned on the radio interface, the PS data had no access to dedicated radio resources; PS signaling and payload were transmitted in unidirectional Temporary Block Flows (TBFs).

These TBFs were short and the data blocks were small, because the blocks had to fit the transported data into the frame structure of a 52-multiframe, the GSM radio transmission format on the physical layer. Larger Logical Link Control (LLC) frames containing already segmented IP packets had to be segmented further into smaller Radio Link Control (RLC) blocks.
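To make the segmentation idea concrete, here is a minimal Python sketch of how a larger LLC frame could be chopped into small RLC blocks; the block size below is purely illustrative and not taken from the GSM/GPRS specifications.

```python
# Illustrative sketch of GPRS-style segmentation: an LLC frame carrying an
# already-segmented IP packet is chopped into small RLC blocks so that each
# block fits the radio frame structure. Sizes are illustrative only, not
# values from the GSM/GPRS specifications.

def segment_llc_frame(llc_frame: bytes, rlc_payload_size: int = 20) -> list[bytes]:
    """Split an LLC frame into fixed-size RLC data blocks (the last may be shorter)."""
    return [llc_frame[i:i + rlc_payload_size]
            for i in range(0, len(llc_frame), rlc_payload_size)]

if __name__ == "__main__":
    llc_frame = bytes(500)                      # a 500-byte LLC frame (dummy payload)
    rlc_blocks = segment_llc_frame(llc_frame)
    print(f"{len(rlc_blocks)} RLC blocks needed for one {len(llc_frame)}-byte LLC frame")
```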

Toward the core network in 2.5G GPRS, the Gb interface was used to transport the IP payload as well as GPRS Mobility Management/Session Management (GMM/SM) signaling messages and short messages (Short Message Service, SMS) between the SGSN and the PCU (Packet Control Unit), as shown below.
All in all, the multiple segmentation/reassembly of IP payload frames generated a fair amount of transport header overhead that limited the chargeable data throughput. In addition, the availability of radio resources for PS data transport was not guaranteed, so this system was suitable only for non-real-time services such as web browsing or e-mail.
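As a rough illustration of how per-block headers eat into the chargeable throughput, the following sketch estimates the goodput share of one IP packet; the RLC payload and header sizes are assumptions chosen only for the example, not figures from the specifications.

```python
# Rough, illustrative estimate of how per-block header overhead reduces the
# chargeable (goodput) data rate. Header and payload sizes are assumptions.

def goodput_ratio(ip_packet_size: int, rlc_payload: int, rlc_header: int) -> float:
    """Fraction of radio-interface bytes that is actual IP payload."""
    blocks = -(-ip_packet_size // rlc_payload)           # ceiling division
    bytes_on_air = blocks * (rlc_payload + rlc_header)   # includes padding in last block
    return ip_packet_size / bytes_on_air

# Example: 1500-byte IP packet, assumed 20-byte RLC payload, 3-byte RLC/MAC header
print(f"goodput share: {goodput_ratio(1500, 20, 3):.0%}")
```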



To overcome these limitations the standards organizations proposed a set of enhancements that led to the parallel development of the UMTS and EGPRS (Enhanced GPRS) standards. The most successful EGPRS standard found in operators’ networks today is EDGE. From the American Code Division Multiple Access (CDMA) technology family, another branch of evolution led to the CDMA2000 standards (defined by the 3GPP2 standards organization).

In comparison to GSM/GPRS, the EGPRS technology also offered a more efficient retransmission of erroneous data blocks, mostly with a lower MCS (Modulation and Coding Scheme) than the one used previously. The retransmitted data also did not need to be sent in separate data blocks, but could be appended piece by piece to regular data frames. This highly sophisticated error correction method, which is unique to EGPRS, is called Incremental Redundancy or Automatic Repeat Request (ARQ) II and is another reason why higher data transmission rates can be reached using EGPRS.
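The following toy sketch illustrates only the Incremental Redundancy principle: each retransmission contributes additional coded bits that are combined with what was already received, so decoding can succeed without ever resending the full block. The decode criterion below is a stand-in, not a real channel decoder.

```python
# Conceptual sketch of Incremental Redundancy (ARQ II): each retransmission
# carries *additional* redundancy rather than a full copy, and the receiver
# combines everything received so far before retrying the decode.

def incremental_redundancy_rx(transmissions: list[set[int]], needed_bits: int) -> int:
    """Return how many transmissions were needed until enough distinct coded
    bits were accumulated to decode (a toy success criterion)."""
    accumulated: set[int] = set()
    for attempt, coded_bits in enumerate(transmissions, start=1):
        accumulated |= coded_bits             # combine with stored soft information
        if len(accumulated) >= needed_bits:   # decode succeeds once enough redundancy
            return attempt
    raise RuntimeError("block still not decodable after all retransmissions")

# First transmission carries heavily punctured bits; retransmissions add new ones.
tx = [set(range(0, 40)), set(range(40, 70)), set(range(70, 100))]
print("decoded after", incremental_redundancy_rx(tx, needed_bits=60), "transmission(s)")
```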

Since these early days two key parameters have driven the evolution of packet services further toward LTE: higher data rates and shorter latency. EGPRS (or EDGE) focused mostly on higher bit rates, but did not include any latency requirements or algorithms to guarantee a defined Quality of Service (QoS) in the early standardization releases. Meanwhile, in parallel to the development of the UMTS standards, important enhancements to EDGE were defined that allow pre-emption of radio resources for packet services and control of QoS. Due to its easy integration into existing GSM networks, EDGE is widely deployed in cellular networks today and is expected to coexist with LTE over the long haul.

Nevertheless, the first standard that promised complete control of QoS was UMTS Release 99. In contrast to the TBFs of (E)GPRS, the user is assigned dedicated radio resources for PS data that are permanently available through a radio connection. These resources are called bearers. In Release 99, when a PDP (Packet Data Protocol) context is activated the UE is ordered by the RNC (Radio Network Controller) to enter the Radio Resource Control (RRC) CELL_DCH state. Dedicated resources are assigned by the Serving Radio Network Controller (SRNC): these are the dedicated physical channels established on the radio interface. Those channels are used for transmission of both IP payload and RRC signaling, as shown below. RRC signaling includes the exchange of Non-Access Stratum (NAS) messages between the UE and SGSN.
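A minimal sketch of that behavior, with invented class and method names, might look like this: activating a PDP context drives the UE into CELL_DCH, where the SRNC assigns dedicated physical channels for both payload and signaling.

```python
# Minimal sketch of the Release 99 idea that PDP context activation moves the
# UE into RRC CELL_DCH with dedicated physical channels. The RRC states are
# real concepts; the classes, method and channel names are illustrative only.

from enum import Enum, auto

class RrcState(Enum):
    IDLE = auto()
    CELL_FACH = auto()
    CELL_DCH = auto()          # dedicated physical channels assigned

class Ue:
    def __init__(self) -> None:
        self.rrc_state = RrcState.IDLE
        self.dedicated_channels: list[str] = []

    def activate_pdp_context(self) -> None:
        """RNC orders the UE into CELL_DCH; the SRNC assigns dedicated resources."""
        self.rrc_state = RrcState.CELL_DCH
        self.dedicated_channels = ["DPDCH (payload)", "DPCCH (control)"]

ue = Ue()
ue.activate_pdp_context()
print(ue.rrc_state, ue.dedicated_channels)
```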



However, in Release 99 the maximum possible bit rate is still limited to 384 kbps for a single connection and, more dramatically, the number of users per cell that can be served at this highest possible bit rate is very limited (only four simultaneous 384 kbps connections per cell are possible on the DL due to the shortage of DL spreading codes). To increase the maximum possible bit rate per cell as well as for the individual user, HSPA was defined in Releases 5 and 6 of 3GPP.
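A back-of-the-envelope illustration of the code limitation is sketched below, assuming a spreading factor of 8 for a 384 kbps downlink DCH; the number of codes reserved for common channels is an assumption made only to reproduce the order of magnitude, not a spec value.

```python
# Back-of-the-envelope illustration of why only a handful of 384 kbps users fit
# into one Release 99 cell: a 384 kbps DCH occupies a low spreading factor
# (assumed SF = 8 here), and the OVSF code tree only offers SF codes at that
# level. The reservation for common channels below is an assumption.

CHIP_RATE = 3_840_000          # WCDMA chip rate, chips per second
SPREADING_FACTOR = 8           # assumed SF of a 384 kbps downlink DCH

codes_at_sf8 = SPREADING_FACTOR                 # size of the OVSF tree at SF = 8
reserved_for_common_channels = 4                # illustrative assumption
available_384k_connections = codes_at_sf8 - reserved_for_common_channels

print("symbol rate per code:", CHIP_RATE // SPREADING_FACTOR, "sym/s")
print("simultaneous 384 kbps DCHs per cell (illustrative):", available_384k_connections)
```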

In High-Speed Downlink Packet Access (HSDPA) the High-Speed Downlink Shared Channel (HS-DSCH), which bundles several High-Speed Physical Downlink Shared Channels (HS-PDSCHs), is used by several UEs simultaneously; that is why it is called a shared channel. A single UE using HSDPA works in the RRC CELL_DCH state. For DL payload transport the HS-DSCH is used, which is mapped onto the HS-PDSCHs. The UL IP payload is still transferred using a dedicated physical data channel (and an appropriate Iub transport bearer); in addition, the RRC signaling is exchanged between the UE and RNC using the dedicated channels, as shown below.



All these channels have to be set up and (re)configured during the call. In all these cases both parties of the radio connection, cell and UE, have to be informed about the required changes. While communication between the NodeB (cell) and the CRNC (Controlling Radio Network Controller) uses NBAP (Node B Application Part), the connection between the UE and the SRNC (physically the same RNC unit, but a different protocol entity) uses the RRC protocol. The big advantage of using a shared channel is higher efficiency in the usage of available radio resources. There is no limitation due to the availability of codes, and the individual data rate assigned to a UE can be adjusted more quickly to its real needs. The only limitation is the availability of processing resources (represented by channel card elements) and buffer memory in the base station.
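The efficiency argument can be illustrated with a deliberately simplistic per-TTI scheduler sketch; the rule below (serve the fullest buffer) is not what any real NodeB implements, it only shows that the shared channel can be reassigned every TTI.

```python
# Sketch of why a shared channel uses radio resources more efficiently: every
# TTI the whole channel can be handed to whichever UE currently needs it,
# instead of tying up a fixed dedicated channel per UE. The scheduling rule
# (serve the fullest buffer) is deliberately simplistic.

def schedule_tti(buffers: dict[str, int], tti_capacity_bits: int) -> dict[str, int]:
    """Give the shared-channel capacity of one TTI to the UE with the most queued data."""
    if not any(buffers.values()):
        return {}
    ue = max(buffers, key=buffers.get)
    granted = min(buffers[ue], tti_capacity_bits)
    buffers[ue] -= granted
    return {ue: granted}

buffers = {"ue1": 12_000, "ue2": 3_000, "ue3": 0}
for tti in range(3):
    print(f"TTI {tti}:", schedule_tti(buffers, tti_capacity_bits=7_200))
```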


In 3G networks the benefits of an Uplink Shared Channel (UL-SCH) have not been introduced due to the need for UL power control, which is a basic constraint of Wideband CDMA (WCDMA) networks. Hence, the UL channel used for High-Speed Uplink Packet Access (HSUPA) is an Enhanced Dedicated Channel (E-DCH). The data volume that the UE is allowed to transmit on the UL is controlled by the network using so-called “grants” to prevent buffer overflow in the base station and RNC. The same “grant” mechanism is found again in LTE.
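A minimal sketch of the grant idea follows; the buffer size and grant rule are invented for illustration and do not reflect the actual E-DCH scheduling grant procedure.

```python
# Sketch of the "grant" idea used for HSUPA (and later LTE) uplink: the network
# tells the UE how much it may send, sized so that the receive buffers in the
# base station / RNC cannot overflow. Sizes and the grant rule are illustrative.

class UplinkGrantScheduler:
    def __init__(self, buffer_capacity_bytes: int) -> None:
        self.buffer_capacity = buffer_capacity_bytes
        self.buffer_fill = 0

    def issue_grant(self, ue_request_bytes: int) -> int:
        """Grant at most what still fits into the network-side buffer."""
        grant = min(ue_request_bytes, self.buffer_capacity - self.buffer_fill)
        self.buffer_fill += grant
        return grant

    def drain(self, processed_bytes: int) -> None:
        """Data forwarded toward the core network frees buffer space again."""
        self.buffer_fill = max(0, self.buffer_fill - processed_bytes)

sched = UplinkGrantScheduler(buffer_capacity_bytes=50_000)
print(sched.issue_grant(40_000))   # fully granted
print(sched.issue_grant(40_000))   # only partial grant, buffer nearly full
sched.drain(30_000)
print(sched.issue_grant(40_000))   # space freed, larger grant possible again
```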

All in all, with HSPA in the UTRAN the data rates on the UL and DL have been significantly increased, but packet latency is still a critical factor. It takes quite a long time until the RRC connection is established in the first step and the radio bearer in the second step. Then, due to limited buffer memory and channel card resources in the NodeB, an often quite aggressive setting of user inactivity timers leads to transport channel-type switching and RRC state change procedures that can be summarized as intra-cell hard handovers. Hard handovers are characterized by the fact that the active radio connection, including the radio bearer, is interrupted for a few hundred milliseconds. Similar interruptions of the data transmission stream are observed during serving HSDPA cell change procedures (often triggered by a previous soft handover) due to flushing of buffered data in the NodeB and rescheduling of data to be transmitted by the RNC. That such interruptions (occurring in dense city center areas with a periodicity of 10–20 seconds) are a major threat to delay-sensitive services is self-explanatory.

Hence, from the user plane QoS perspective the two major targets of LTE are:

· a further increase in the available bandwidth and maximum data rate per cell as well as for the individual subscriber;
· reducing the delays and interruptions in user data transfer to a minimum.

These are the reasons why LTE has an always-on concept in which the radio bearer is set up immediately when a subscriber attaches to the network, and all radio resources provided to subscribers by the E-UTRAN are shared resources, as shown below.



Here it is illustrated that the IP payload as well as RRC and NAS signaling are transmitted on the radio interface using unidirectional shared channels, the UL-SCH and the Downlink Shared Channel (DL-SCH). The payload part of this radio connection is called the radio bearer. The radio bearer is the bidirectional point-to-point connection for the user plane between the UE and eNodeB (eNB). The E-RAB (E-UTRAN Radio Access Bearer) is the user plane connection between the UE and the Serving Gateway (S-GW), and the S5 bearer is the user plane connection between the S-GW and the public data network gateway (PDN-GW).
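Assuming that the E-RAB is composed of the radio bearer plus an S1 bearer between the eNB and S-GW (the S1 bearer is implied rather than named above), the bearer layering can be sketched as follows; the data structures are illustrative, not a 3GPP information model.

```python
# Sketch of how the LTE user plane is layered out of bearers: radio bearer
# (UE-eNB), S1 bearer (eNB-S-GW, assumed here), S5 bearer (S-GW-PDN-GW), with
# the E-RAB spanning UE-S-GW and the PDN connection spanning UE-PDN-GW.

from dataclasses import dataclass

@dataclass
class Bearer:
    name: str
    endpoint_a: str
    endpoint_b: str

radio_bearer = Bearer("radio bearer", "UE", "eNB")
s1_bearer    = Bearer("S1 bearer", "eNB", "S-GW")       # assumed intermediate bearer
s5_bearer    = Bearer("S5 bearer", "S-GW", "PDN-GW")

e_rab          = [radio_bearer, s1_bearer]               # UE <-> S-GW
pdn_connection = [radio_bearer, s1_bearer, s5_bearer]    # UE <-> PDN-GW

print("E-RAB spans:", " + ".join(b.name for b in e_rab))
for b in pdn_connection:
    print(f"{b.name}: {b.endpoint_a} <-> {b.endpoint_b}")
```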

The end-to-end connection between the UE and PDN-GW, that is, the gateway to the IP world outside the operator’s network, is called a PDN connection in the E-UTRAN standard documents and a session in the core network standards. Regardless of the name, the main characteristic of this PDN connection is that the IP payload is transparently tunneled through the core and the radio access network.
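The transparency of the tunnel can be sketched as follows; the header layout is a gross simplification of GTP-U, kept only to show that the inner IP packet passes through the tunnel byte-for-byte unchanged.

```python
# Minimal sketch of transparent tunneling: the subscriber's IP packet is carried
# unchanged inside a tunnel whose outer header only identifies the tunnel
# (here a bare 4-byte tunnel endpoint ID). This is NOT the real GTP-U format.

def gtp_encapsulate(inner_ip_packet: bytes, teid: int) -> bytes:
    """Prefix an oversimplified tunnel header carrying the tunnel endpoint ID."""
    return teid.to_bytes(4, "big") + inner_ip_packet

def gtp_decapsulate(tunneled: bytes) -> tuple[int, bytes]:
    """Strip the toy tunnel header; the inner packet comes out identical."""
    return int.from_bytes(tunneled[:4], "big"), tunneled[4:]

inner = b"\x45...subscriber IP packet..."
teid, restored = gtp_decapsulate(gtp_encapsulate(inner, teid=0x1A2B3C4D))
assert restored == inner        # payload is untouched end to end
print(hex(teid))
```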

To control the tunnels and radio resources a set of control plane connections runs in parallel with the payload transport. On the radio interface, RRC and NAS signaling messages are transmitted using the same shared channels and the same RLC transport layer that is used to transport the IP payload. RRC signaling terminates in the eNB (different from 3G UTRAN, where RRC was transparently routed by the NodeB to the RNC). The NAS signaling information is, as in 3G UTRAN, simply forwarded to the Mobility Management Entity (MME) and/or the UE by the eNB. For registration and authentication the MME exchanges signaling messages with the network’s central subscriber database, the Home Subscriber Server (HSS).

To open, close, and modify the GTP/IP tunnel between the eNB and S-GW, the MME exchanges GTP signaling messages with the S-GW and the S-GW has the same kind of signaling connection with the PDN-GW to establish, release, and maintain the GTP/IP tunnel called the S5 bearer. Between the MME and eNB, together with the E-RAB, a UE context is established to store connection-relevant parameters like the context information for ciphering and integrity protection. This UE context can be stored in multiple eNBs, all of them belonging to the list of registered tracking areas for a single subscriber. Using this tracking area list and UE contexts, the inter-eNB handover delay can be reduced to a minimum.
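A small sketch of a UE context that may be held by the eNBs of the registered tracking areas follows; all field names are illustrative and do not reproduce the 3GPP UE context definition.

```python
# Sketch of the idea that a UE context (security parameters etc.) can be kept
# in several eNBs belonging to the UE's tracking area list, so an inter-eNB
# handover does not have to rebuild everything from scratch.

from dataclasses import dataclass, field

@dataclass
class UeContext:
    imsi: str
    ciphering_key: bytes
    integrity_key: bytes
    tracking_area_list: list[str] = field(default_factory=list)

def eligible_enbs(ctx: UeContext, enbs_by_tracking_area: dict[str, list[str]]) -> set[str]:
    """Return the eNBs that may hold this UE context: all eNBs in the registered TAs."""
    return {enb
            for ta in ctx.tracking_area_list
            for enb in enbs_by_tracking_area.get(ta, [])}

ctx = UeContext("001010123456789", b"\x00" * 16, b"\x11" * 16, ["TA1", "TA2"])
print(eligible_enbs(ctx, {"TA1": ["eNB-1", "eNB-2"], "TA2": ["eNB-3"]}))
```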

The two most basic LTE standard documents are 3GPP TS 23.401 “GPRS Enhancements for E-UTRAN Access” and 3GPP TS 36.300 “E-UTRA and E-UTRAN; Overall Description.” These two specifications explain in a comprehensive way the major improvements in LTE that are driven by an increasing demand for higher bandwidth and shorter latency of PS user plane services. The basic network functions and signaling procedures are explained, as well as the network architecture, interfaces, and protocol stacks.






