A Brief Summary of 4G LTE


University of California, Los Angeles

A Brief Summary of 4G LTE and Physical Layer

Fundamentals

A report submitted in partial satisfaction

of the requirements for the degree

Master of Science in Electrical Engineering

by

Zhiyuan Shen

2013

© Copyright by

Zhiyuan Shen

2013

Abstract of the Report

A Brief Summary of 4G LTE and Physical Layer

Fundamentals

by

Zhiyuan Shen

Master of Science in Electrical Engineering

University of California, Los Angeles, 2013

In the last decades, mobile communication has evolved from being an expensive technology for a few selected individuals to today's ubiquitous systems used by a majority of the world's population. The Long-Term Evolution (LTE) system is the main trend in wireless communication systems after 3G. It is often called "4G", but many also claim that LTE release 10, also referred to as LTE-Advanced, is the true 4G evolution step, with the first release of LTE (release 8) then being labeled "3.9G".

How to take full advantage of the available bandwidth to improve throughput and enhance quality of service is becoming more and more important. The main objective of this study is to give a brief introduction to LTE and a comprehensive analysis of the LTE physical layer. A comparison of the throughput and fairness of different physical-layer schedulers is then provided to see how they improve physical-layer performance. Finally, simulation results for the different scheduling algorithms are presented so that trade-offs between them can be made.

Table of Contents

1 Introduction
1.1 DRIVERS FOR LTE
1.1.1 The 3G Evolution to 4G
1.1.2 Performance Requirements
2 Overview and Channel Structure of LTE
2.1 OVERALL SYSTEM ARCHITECTURE
2.1.1 Core Network
2.1.2 Radio-Access Network
2.2 RADIO PROTOCOL ARCHITECTURE
2.2.1 Scheduling
3 LTE Physical Layer
3.1 OVERALL TIME-FREQUENCY STRUCTURE
3.2 DUPLEX SCHEMES
3.2.1 Frequency-Division Duplex (FDD)
3.2.2 Time-Division Duplex (TDD)
4 Scheduling Approaches
4.1 MAXIMUM SUM RATE ALGORITHM
4.2 MAXIMUM FAIRNESS ALGORITHM
4.3 PROPORTIONAL RATE CONSTRAINTS ALGORITHM
4.4 PROPORTIONAL FAIRNESS SCHEDULING
4.5 PERFORMANCE COMPARISON
5 Simulating the LTE Physical Layer
5.1 SIMULATOR STRUCTURE
5.1.1 Overall Simulator Structure
5.1.2 Transmitter
5.1.3 Receiver
5.2 SIMULATION RESULTS OF SCHEDULING ALGORITHM
6 Summary and Conclusions
References

List of Figures

1.1 Releases of 3GPP specifications for LTE
1.2 LTE and its evolution
2.1 Core-network (EPC) architecture
2.2 Radio-access-network interfaces
2.3 Overall RAN protocol architecture
2.4 LTE protocol architecture (downlink)
3.1 LTE time-domain structure
3.2 The LTE physical time-frequency resource
3.3 Downlink resource grid
3.4 Uplink/downlink time-frequency structure for FDD and TDD
3.5 Different downlink/uplink configurations in the case of TDD
5.1 Overall simulator structure
5.2 Structure of the LTE transmitter
5.3 Structure of the LTE receiver
5.4 Throughput ECDF for the average UE throughput, spectral efficiency, and wideband SINR, and the mapping between the wideband SINR and the average throughput for each UE
5.5 UE throughput empirical CDF for each of the scheduler setups
5.6 Scheduler comparison in terms of mean, edge, and peak UE throughput

List of Tables

2.1 EPC nodes
2.2 The different protocol entities of the radio-access network
3.1 Bandwidth and resource block specifications
4.1 Comparison of different scheduling algorithms

Acknowledgments

I would like to express my gratitude to my supervisor, Prof. Daneshrad, whose help, stimulating suggestions, and encouragement guided me throughout the research for and writing of this report. I would especially like to thank my parents, whose patient love enabled me to complete this work.

Vita

2011-2013 M.S. (Electrical Engineering), UCLA.

CHAPTER 1

Introduction

The term "Long Term Evolution" (LTE) refers to the technology specified by 3GPP for a novel air interface. Some of its targets include reduced latency, higher user data rates, improved system capacity and coverage, and reduced cost of operation. [1] In this report, the main ideas of 4G LTE, especially the physical layer, are summarized based on release notes, books, and papers.

1.1 DRIVERS FOR LTE

The evolution is driven by the creation and development of new services together with the advancement of the technology available for mobile systems. A prime driver for 4G LTE is the increasing need for Internet Protocol based services with the performance of a fixed broadband connection. The main service-related design parameters for a radio interface supporting a variety of services are: [2]

- Data rate: Higher data rates for web browsing, streaming, and file transfer push the peak data rates toward Gbit/s for 4G.
- Delay: Very low delay for interactive services.
- Capacity: The total data rate delivered on average from each deployed base station site and per hertz of licensed spectrum.

Another driver and essential design parameter for 4G LTE is the demand for more spectrum resources to expand systems.[2]

1.1.1 The 3G Evolution to 4G

In 2004, a workshop was organized on the 3GPP Long-Term Evolution (LTE) radio interface. During the first half-year, most of the time was spent defining the requirements, or design targets. These were documented in a 3GPP technical report and approved in June 2005. [3] Most notable are the requirements on high data rates at the cell edge and the importance of low delay, spectrum flexibility, and maximum commonality between FDD and TDD solutions. Work has since continued on LTE, with new features added in each release, as shown in Figure 1.1.

Figure 1.1: Releases of 3GPP specifications for LTE.

1.1.2 Performance Requirements

To achieve its goals, LTE must satisfy the following requirements[6]:

- Data rates: Up to 100 Mb/s within a 20 MHz downlink spectrum allocation (two-channel MIMO) and 50 Mb/s within a 20 MHz uplink (single-channel transmission), or, equivalently, spectral efficiency values of 5 bps/Hz and 2.5 bps/Hz, respectively.
- Throughput: Downlink average user throughput per MHz about 3-4 times higher than in Release 6; uplink average user throughput per MHz about 2-3 times higher than in Release 6.
- Bandwidth: 1.4 MHz to 20 MHz, in both paired and unpaired spectrum.
- Mobility: Optimized for low terminal speeds (0-15 km/h); connection maintained for very high UE speeds (up to 500 km/h).
- Coverage: The above targets should be met for 5 km cells, with slight degradation in throughput and spectral efficiency allowed for 30 km cells.

Figure 1.2: LTE and its evolution

CHAPTER 2

Overview and Channel Structure of LTE

2.1 OVERALL SYSTEM ARCHITECTURE

In parallel with the work on the LTE radio-access technology, 3GPP evolved the Core Network (CN) into a flatter, all-IP, packet-based architecture, the Evolved Packet Core (EPC), and defined the overall system architecture of the Radio-Access Network (RAN). This work resulted in a flat RAN architecture. [2]

2.1.1 Core Network

The Evolved Packet Core (EPC) is a radical evolution from the GSM/GPRS core network; it supports access to the packet-switched domain only, with no access to the circuit-switched domain. The different types of nodes in the EPC are illustrated in Figure 2.1.

In addition to the nodes listed in Table 2.1, the EPC also contains other types of nodes such as the Policy and Charging Rules Function (PCRF) and the Home Subscriber Server (HSS). [2]

2.1.2 Radio-Access Network

The 4G LTE RAN uses a flat architecture with a single type of node, the eNodeB, which is a logical node and not a physical implementation. A common question is whether a base station is an implementation of an eNodeB; the answer is that a base station is a possible implementation of, but not the same as, an eNodeB. [2]

Table 2.1: EPC nodes

Node | Function | Responsibilities
Mobility Management Entity (MME) | Control-plane node of the EPC | Connection and release of bearers to a terminal, handling of IDLE-to-ACTIVE transitions, and handling of security keys
Serving Gateway (S-GW) | User-plane node connecting the EPC to the RAN | Mobility anchor when terminals move between eNodeBs, and mobility anchor for other 3GPP technologies
Packet Data Network Gateway (PDN Gateway, P-GW) | Connects the EPC to the internet | Allocation of the IP address and quality-of-service enforcement for a specific terminal

Figure 2.2: Radio-access-network interfaces

2.2 RADIO PROTOCOL ARCHITECTURE

The RAN protocol architecture is shown in Figure 2.3, and a general overview of the protocol architecture for the downlink is shown in Figure 2.4. The uplink protocol structure is quite similar to the downlink structure.

Figure 2.3: Overall RAN protocol architecture

The different protocol entities of the radio-access network are summarized in Table 2.2. [2]

Table 2.2: The different protocol entities of the radio-access network

Protocol Entity | Function
Packet Data Convergence Protocol (PDCP) | IP header compression, ciphering, integrity protection of the transmitted data, and in-sequence delivery and duplicate removal for handover
Radio-Link Control (RLC) | Segmentation and concatenation, retransmission handling, duplicate detection, and in-sequence delivery to higher layers
Medium-Access Control (MAC) | Multiplexing of logical channels, hybrid-ARQ retransmissions, and uplink and downlink scheduling
Physical Layer (PHY) | Coding and decoding, modulation and demodulation, multi-antenna mapping, and other typical physical-layer functions

2.2.1 Scheduling

The scheduler, formally part of the MAC layer though often better treated as a separate entity, dynamically assigns time-frequency resource-block pairs to users in both uplink and downlink. Uplink and downlink scheduling are independent of each other in 4G LTE. A resource-block pair corresponds to a time-frequency unit of 1 ms by 180 kHz. In each 1 ms interval, the eNodeB takes a scheduling decision and sends scheduling information to the selected set of terminals. The goal of the different scheduling algorithms is to take advantage of the channel variations between terminals and preferably schedule transmissions to a terminal on resources with advantageous channel conditions. [2]

CHAPTER 3

LTE Physical Layer

Not surprisingly, as stated in the previous chapter, the downlink and uplink of the LTE physical layer are quite different. This is the result of the difference in capabilities between the eNodeB and the UE. The features of the physical layer are described in the following sections.

3.1 OVERALL TIME-FREQUENCY STRUCTURE

OFDM is used as the basic transmission scheme for the LTE downlink, while the uplink uses a DFT-precoded variant of OFDM (SC-FDMA). The LTE subcarrier spacing is carefully chosen to be 15 kHz, which provides a good balance between the overhead of the cyclic prefix and the sensitivity to Doppler spread and shift and other types of frequency errors and inaccuracies. In the time domain, the radio-frame and subframe structure is illustrated in Figure 3.1. All time intervals are multiples of the basic time unit Ts = 1/(15000 × 2048) seconds. Two cyclic-prefix lengths are defined, giving seven or six OFDM symbols per slot respectively, and different cyclic-prefix lengths may be used for different subframes within a frame. [2]

The smallest physical resource in LTE is called a resource element, as illustrated in Figures 3.2 and 3.3. Resource elements are grouped into resource blocks that consist of 12 consecutive subcarriers in the frequency domain and one 0.5 ms slot in the time domain. A resource block thus contains 7 × 12 = 84 resource elements with a normal cyclic prefix and 6 × 12 = 72 resource elements with an extended cyclic prefix.
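This bookkeeping is easy to verify. The following short Python sketch, using only the constants quoted in this section, computes the basic time unit and the number of resource elements per resource block for both cyclic-prefix lengths.

# Sketch: LTE resource-grid bookkeeping, using the constants quoted above.
SUBCARRIER_SPACING_HZ = 15_000          # 15 kHz subcarrier spacing
SUBCARRIERS_PER_RB = 12                 # 12 consecutive subcarriers per resource block
SLOTS_PER_SUBFRAME = 2                  # one 1 ms subframe = two 0.5 ms slots
Ts = 1.0 / (15_000 * 2048)              # basic time unit

SYMBOLS_PER_SLOT = {"normal": 7, "extended": 6}

for cp, n_sym in SYMBOLS_PER_SLOT.items():
    re_per_rb = n_sym * SUBCARRIERS_PER_RB            # 84 or 72 resource elements
    re_per_rb_pair = re_per_rb * SLOTS_PER_SUBFRAME   # per 1 ms resource-block pair
    print(f"{cp} CP: {re_per_rb} REs per RB, {re_per_rb_pair} REs per RB pair")

print(f"RB bandwidth: {SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_HZ / 1e3:.0f} kHz")
print(f"Ts = {Ts:.3e} s")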

Table 3.1 shows the LTE bandwidth and resource block configurations.

Figure 3.2: The LTE physical time-frequency resource

Bandwidth (MHz) | 1.4 | 3 | 5 | 10 | 15 | 20
Number of RBs | 6 | 15 | 25 | 50 | 75 | 100
Number of occupied subcarriers | 72 | 180 | 300 | 600 | 900 | 1200
IFFT/FFT size | 128 | 256 | 512 | 1024 | 1536 | 2048
Subcarrier spacing (kHz) | 15 | 15 | 15 | 15 | 15 | 15

Table 3.1: Bandwidth and resource block specifications
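The relationships behind Table 3.1 can be reproduced directly: the number of occupied subcarriers is twelve times the number of resource blocks, and the nominal sampling rate is the FFT size multiplied by the 15 kHz subcarrier spacing. A minimal sketch, using the values from the table:

# Sketch: reproduce the relationships in Table 3.1.
configs = {  # bandwidth in MHz -> (number of RBs, IFFT/FFT size)
    1.4: (6, 128), 3: (15, 256), 5: (25, 512),
    10: (50, 1024), 15: (75, 1536), 20: (100, 2048),
}

for bw_mhz, (n_rb, nfft) in configs.items():
    occupied = n_rb * 12                     # occupied subcarriers = RBs x 12
    fs_mhz = nfft * 15e3 / 1e6               # sampling rate = FFT size x 15 kHz
    print(f"{bw_mhz} MHz: {n_rb} RBs, {occupied} subcarriers, fs = {fs_mhz:.2f} MHz")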

3.2 DUPLEX SCHEMES

LTE provides great spectrum flexibility and supports both FDD- and TDD-based duplex operation. The time and frequency structures are shown in Figure 3.4.

Figure 3.4: Uplink/downlink time-frequency structure for FDD and TDD

3.2.1 Frequency-Division Duplex (FDD)

As can be seen in Figure 3.4, in the case of FDD, uplink transmission (fUL) and downlink transmission (fDL) use two separate carrier frequencies, each carrying ten subframes per frame. With a full-duplex-capable terminal, transmission and reception can occur simultaneously, while with a half-duplex-capable terminal they cannot. The base station, in contrast, always operates in full duplex. [2]

3.2.2 Time-Division Duplex (TDD)

As can be seen in Figure 3.4, in the case of TDD, uplink and downlink transmission share the same carrier frequency and are separated in the time domain. The switch between downlink and uplink occurs in the special subframe 1 or subframe 6; some subframes are used for uplink transmission, while the rest are used for downlink transmission. As seen in Figure 3.5, subframes 0 and 5 are always allocated for downlink transmission, while subframe 2 is always allocated for uplink transmission. The remaining subframes can be flexibly allocated. [2]

Figure 3.5: Different downlink/uplink configurations in the case of TDD
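For reference, the configurations sketched in Figure 3.5 can be written out per subframe (D = downlink, U = uplink, S = special subframe). The listing below is an illustrative sketch based on the commonly tabulated TDD configurations (cf. 3GPP TS 36.211) rather than on the figure itself, so the exact patterns should be treated as an assumption here.

# Sketch (assumption): the seven LTE TDD uplink/downlink configurations as commonly
# tabulated; D = downlink, U = uplink, S = special subframe.
TDD_CONFIGS = {
    0: "DSUUUDSUUU",
    1: "DSUUDDSUUD",
    2: "DSUDDDSUDD",
    3: "DSUUUDDDDD",
    4: "DSUUDDDDDD",
    5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

for cfg, pattern in TDD_CONFIGS.items():
    # Subframes 0 and 5 are always downlink and subframe 2 always uplink, as noted above.
    assert pattern[0] == "D" and pattern[5] == "D" and pattern[2] == "U"
    print(f"Configuration {cfg}: {' '.join(pattern)}")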

CHAPTER 4

Scheduling Approaches

Many packet scheduling algorithms developed for single-carrier wireless systems have been proposed in the literature. However, the performance of these algorithms in multi-carrier wireless systems requires further investigation. Scheduling in LTE is performed at a 1 ms interval (the transmit time interval, TTI), and two consecutive RBs (in the time domain) are assigned to a user for a TTI. [8] We focus on the class of techniques that attempt to balance the desire for high throughput with fairness among the users in the system.

The resource allocation is usually formulated as a constrained optimization problem, to either

1. minimize the total transmit power with a constraint on the user data rate, or
2. maximize the total data rate with a constraint on total transmit power.

The first objective is appropriate for fixed-rate applications (e.g., voice), while the second is more appropriate for bursty applications like data and other IP applications. Therefore, in this section we focus on the rate-adaptive algorithms (category 2), which are more relevant to LTE systems. [5]

4.1 MAXIMUM SUM RATE ALGORITHM

The objective of the maximum sum rate (MSR) algorithm is to maximize the sum rate of all users, given a total transmit power constraint.[9] This algorithm is optimal if the goal is to get as much data as possible through the system. The drawback of the MSR algorithm is that it is likely that a few users that are close to the base station (and hence have excellent channels) will be allocated all the system resources.

Let $P_{k,l}$ denote user $k$'s transmit power in subcarrier $l$. The signal-to-interference-plus-noise ratio (SINR) for user $k$ in subcarrier $l$, denoted as $\mathrm{SINR}_{k,l}$, can be expressed as

$$\mathrm{SINR}_{k,l} = \frac{P_{k,l}\, h_{k,l}^2}{\sum_{j=1,\, j \neq k}^{K} P_{j,l}\, h_{j,l}^2 + \sigma^2 \frac{B}{L}}$$

Using the Shannon capacity formula as the throughput measure, the MSR algorithm maximizes the following quantity:

$$\max_{P_{k,l}} \; \sum_{k=1}^{K} \sum_{l=1}^{L} \frac{B}{L} \log\!\left(1 + \mathrm{SINR}_{k,l}\right)$$

with the total power constraint

$$\sum_{k=1}^{K} \sum_{l=1}^{L} P_{k,l} \le P_{tot}$$

The sum capacity is maximized if the total throughput in each subcarrier is maximized. Hence, the max sum capacity optimization problem can be decoupled into $L$ simpler problems, one for each subcarrier. Further, the sum capacity in subcarrier $l$, denoted as $C_l$, can be written as

$$C_l = \sum_{k=1}^{K} \log\!\left(1 + \frac{P_{k,l}}{P_{tot,l} - P_{k,l} + \frac{\sigma^2 B}{h_{k,l}^2 L}}\right)$$

where $P_{tot,l} - P_{k,l}$ denotes the other users' interference to user $k$ in subcarrier $l$. It is easy to show that $C_l$ is maximized when all available power $P_{tot,l}$ is assigned to just the single user with the largest channel gain in subcarrier $l$. This result agrees with intuition: give each channel to the user with the best gain in that channel. This is sometimes referred to as a "greedy" optimization. [5]
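This greedy per-subcarrier rule is straightforward to express in code. The sketch below is a simplified illustration only: it assumes equal power $P_{tot}/L$ on every subcarrier instead of an optimal power allocation, assigns each subcarrier to the user with the largest channel gain, and evaluates the resulting Shannon rates.

import numpy as np

def msr_allocation(h, p_total, bandwidth, noise_psd):
    """Greedy MSR sketch: each subcarrier goes to the user with the best channel
    gain; power is split equally over subcarriers (a simplification)."""
    n_users, n_sc = h.shape
    p_per_sc = p_total / n_sc
    noise_per_sc = noise_psd * bandwidth / n_sc       # sigma^2 * B / L
    assignment = np.argmax(h ** 2, axis=0)            # best user per subcarrier
    snr = p_per_sc * h[assignment, np.arange(n_sc)] ** 2 / noise_per_sc
    rates_per_sc = (bandwidth / n_sc) * np.log2(1.0 + snr)
    user_rates = np.zeros(n_users)
    np.add.at(user_rates, assignment, rates_per_sc)   # accumulate per-user rate
    return assignment, user_rates

# Hypothetical example: 4 users, 12 subcarriers, Rayleigh-fading channel gains.
rng = np.random.default_rng(0)
h = np.abs(rng.standard_normal((4, 12)) + 1j * rng.standard_normal((4, 12))) / np.sqrt(2)
assignment, rates = msr_allocation(h, p_total=1.0, bandwidth=180e3, noise_psd=1e-9)
print("subcarrier -> user:", assignment)
print("per-user rate (bit/s):", np.round(rates, 1))

As expected from the discussion above, users with strong channels tend to capture most subcarriers, which is exactly the fairness problem addressed by the next algorithms.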

4.2 MAXIMUM FAIRNESS ALGORITHM

Although the total throughput is maximized by the MSR algorithm, in a cellular system like LTE, where the path-loss attenuation varies by several orders of magnitude between users, some users will be extremely underserved by an MSR-based scheduling procedure. At the other extreme, the maximum fairness algorithm aims to allocate the subcarriers and power such that the minimum user data rate is maximized. This essentially corresponds to equalizing the data rates of all users, hence the name "Maximum Fairness". The maximum fairness algorithm can be referred to as a max-min problem, since the goal is to maximize the minimum data rate. [5]
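One common greedy heuristic for this max-min objective, shown below as an illustrative sketch (it is not necessarily the exact algorithm of [13], and equal power per subcarrier is assumed), repeatedly serves whichever user currently has the lowest accumulated rate, letting that user take its best remaining subcarrier.

import numpy as np

def max_min_allocation(h, p_total, bandwidth, noise_psd):
    """Greedy max-min sketch: always serve the currently worst-off user,
    assigning it the remaining subcarrier on which its gain is largest."""
    n_users, n_sc = h.shape
    p_per_sc = p_total / n_sc
    noise_per_sc = noise_psd * bandwidth / n_sc
    rate = lambda k, l: (bandwidth / n_sc) * np.log2(1 + p_per_sc * h[k, l] ** 2 / noise_per_sc)

    user_rates = np.zeros(n_users)
    free = set(range(n_sc))
    assignment = {}
    while free:
        k = int(np.argmin(user_rates))              # worst-off user so far
        l = max(free, key=lambda sc: h[k, sc])      # its best remaining subcarrier
        assignment[l] = k
        user_rates[k] += rate(k, l)
        free.remove(l)
    return assignment, user_rates

rng = np.random.default_rng(1)
h = np.abs(rng.standard_normal((4, 12)) + 1j * rng.standard_normal((4, 12))) / np.sqrt(2)
_, rates = max_min_allocation(h, 1.0, 180e3, 1e-9)
print("per-user rate (bit/s):", np.round(rates, 1))   # rates are much closer together than with MSR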

4.3 PROPORTIONAL RATE CONSTRAINTS ALGORITHM

A weakness of the Maximum Fairness algorithm is that the rate distribution among users is not flexible. Further, the total throughput is largely limited by the user with the worst SINR, as most of the resources are allocated to that user, which is clearly suboptimal. A generalization of the Maximum Fairness algorithm is the Proportional Rate Constraints (PRC) algorithm, whose objective is to maximize the sum throughput, with the additional constraint that each user's data rate is proportional to a set of pre-determined system parameters $\{\beta_k\}_{k=1}^{K}$.

Mathematically, the proportional data-rate constraint can be expressed as

$$\frac{R_1}{\beta_1} = \frac{R_2}{\beta_2} = \cdots = \frac{R_K}{\beta_K}$$

where each user's achieved data rate $R_k$ is

$$R_k = \sum_{l=1}^{L} \rho_{k,l}\, \frac{B}{L} \log_2\!\left(1 + \frac{P_{k,l}\, h_{k,l}^2}{\sigma^2 \frac{B}{L}}\right)$$

and $\rho_{k,l}$ can only take the value 1 or 0, indicating whether subcarrier $l$ is used by user $k$ or not. Clearly, this is the same setup as the Maximum Fairness algorithm if $\beta_k = 1$ for each user. The advantage is that arbitrary data-rate proportions can be achieved by varying the $\beta_k$ values. [5]

4.4 PROPORTIONAL FAIRNESS SCHEDULING

The three algorithms we have discussed thus far attempt to instantaneously achieve an objective such as the total sum throughput (MSR algorithm), maximum fairness (equal data rates among all users), or pre-set proportional rates for each user. In addition to throughput and fairness, a third element enters the trade-off, which is latency. In fact, the MSR algorithm achieves both fairness and maximum throughput if the users are assumed to have the same average channels in the long term (on the order of minutes, hours, or more) and there is no constraint with regard to latency. Since latencies even on the order of seconds are generally unacceptable, scheduling algorithms that balance latency and throughput and achieve some degree of fairness are needed. The most popular framework for this type of scheduling is Proportional Fairness (PF) scheduling. [10][11]

Let $R_k(t)$ denote the instantaneous data rate that user $k$ can achieve at time $t$, and let $T_k(t)$ be the average throughput for user $k$ up to time slot $t$. The Proportional Fairness scheduler selects the user, denoted $k^*$, with the highest ratio

$$k^* = \arg\max_{k} \frac{R_k(t)}{T_k(t)}$$

for transmission. In the long term, this is equivalent to selecting the user with the highest instantaneous rate relative to its mean rate. The average throughput $T_k(t)$ for all users is then updated according to

$$T_k(t+1) = \begin{cases} \left(1 - \frac{1}{t_c}\right) T_k(t) + \frac{1}{t_c} R_k(t), & k = k^* \\ \left(1 - \frac{1}{t_c}\right) T_k(t), & k \neq k^* \end{cases} \qquad (4.1)$$

Since the Proportional Fairness scheduler selects the user with the largest instantaneous data rate relative to its average throughput, "bad" channels for each user are unlikely to be selected. On the other hand, users that have been consistently underserved receive scheduling priority, which promotes fairness. The parameter $t_c$ controls the latency of the system. If $t_c$ is large, then the latency increases, with the benefit of higher sum throughput. If $t_c$ is small, the latency decreases since the average throughput values change more quickly, at the expense of some throughput.

Let $R_k(t, n)$ be the supportable data rate for user $k$ in subcarrier $n$ at time slot $t$. Then, for each subcarrier, the user with the largest $R_k(t, n)/T_k(t)$ is selected for transmission. Let $\Omega_k(t)$ denote the set of subcarriers in which user $k$ is scheduled for transmission at time slot $t$; the average user throughput is then updated as

$$T_k(t+1) = \left(1 - \frac{1}{t_c}\right) T_k(t) + \frac{1}{t_c} \sum_{n \in \Omega_k(t)} R_k(t, n)$$

for $k = 1, 2, \ldots, K$. [5]
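The update rule (4.1) and its per-subcarrier extension translate directly into a short simulation. The sketch below is illustrative only: equal power per subcarrier is implicit, and the supportable rates are drawn from a hypothetical fading model rather than from an LTE link-level simulation.

import numpy as np

def pf_schedule(rates, t_c=100.0):
    """Per-subcarrier proportional-fairness sketch.
    rates[t, k, n] = supportable rate R_k(t, n) for user k on subcarrier n at slot t."""
    n_slots, n_users, n_sc = rates.shape
    T = np.full(n_users, 1e-6)                      # average throughputs T_k, small init to avoid /0
    served = np.zeros(n_users)
    for t in range(n_slots):
        metric = rates[t] / T[:, None]              # R_k(t, n) / T_k(t)
        winners = np.argmax(metric, axis=0)         # winning user per subcarrier
        slot_rate = np.zeros(n_users)
        np.add.at(slot_rate, winners, rates[t, winners, np.arange(n_sc)])
        T = (1 - 1 / t_c) * T + (1 / t_c) * slot_rate   # update rule (4.1), per-subcarrier form
        served += slot_rate
    return T, served / n_slots

# Hypothetical example: 4 users with unequal mean SNRs, 12 subcarriers, 1000 slots.
rng = np.random.default_rng(2)
snr = rng.exponential(scale=[[4.0], [2.0], [1.0], [0.5]], size=(4, 12))
rates = np.log2(1 + rng.exponential(1.0, size=(1000, 4, 12)) * snr)
T, mean_rate = pf_schedule(rates)
print("long-run average throughput per user:", np.round(mean_rate, 2))

Smaller values of the t_c parameter make T track recent rates more closely, reducing latency at some cost in sum throughput, exactly as described above.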

4.5 PERFORMANCE COMPARISON

In this section, we briefly compare the performance of the various scheduling algorithms that we have discussed, in order to gain intuition about their relative performance and merits. Table 4.1 compares the four resource allocation algorithms for OFDMA systems. In summary, the Maximum Sum Rate allocation is the best in terms of total throughput and achieves a low computational complexity, but it has a very unfair distribution of data rates. Hence, the MSR algorithm is viable only when all users have nearly identical channel conditions and a relatively large degree of latency is tolerable. The Maximum Fairness algorithm achieves complete fairness while sacrificing significant throughput, and so is appropriate only for fixed, equal-rate applications. The Proportional Rate Constraints (PRC) algorithm allows a flexible trade-off between these two extremes, but it may not always be possible to aptly set the desired rate constraints in real time. We also described the popular Proportional Fairness algorithm, which is fairly simple to implement and also achieves a practical balance between throughput and fairness. [5]

Algorithm | Sum Capacity | Fairness | Complexity | Simple?
Maximum Sum Rate (MSR) | Best | Poor and inflexible | Low | Very simple [12]
Maximum Fairness (MF) | Poor | Best but inflexible | Medium | See [13]
Proportional Rate Constraints (PRC) | Good | Most flexible | High | See [14]
Proportional Fairness (PF) | Good | Flexible | Low | See [11]

Table 4.1: Comparison of different scheduling algorithms

CHAPTER 5

Simulating the LTE Physical Layer

Research and development of signal processing algorithms for UMTS Long Term Evolution (LTE) requires a realistic, flexible, and standard-compliant simulation environment. We use a MATLAB-based downlink physical-layer simulator for LTE from the Vienna University of Technology. [15] The structure of this simulator is introduced in the next section.

Realistic performance evaluations of LTE require standard-compliant simulators; for that reason, commercially available simulators have been developed, for example [16] [17] [18]. The simulator used here currently implements a standard-compliant LTE downlink, with its main features being Adaptive Modulation and Coding (AMC), MIMO transmission, multiple users, and scheduling. Most parts of the LTE simulator are written in plain MATLAB code. [15]

5.1 SIMULATOR STRUCTURE

5.1.1 Overall Simulator Structure

The LTE link-level simulator consists of the following functional parts: one transmitting eNodeB, N receiving User Equipments (UEs), a downlink channel model over which only the Downlink Shared Channel (DL-SCH) is transmitted, signaling information, and an error-free uplink feedback channel with adjustable delay. The elements of the simulator are shown in Figure 5.1. [15]

5.1.2 Transmitter

The structure of the transmitter is depicted in Figure 5.2. The LTE downlink physical resources can be represented by a time-frequency resource grid in which each resource element corresponds to one OFDM subcarrier during one OFDM symbol interval. These resource elements are grouped into Resource Blocks (RBs) that consist of six or seven OFDM symbols (depending on the cyclic-prefix length utilized) and twelve consecutive subcarriers, corresponding to a nominal resource block bandwidth of 180 kHz.

Figure 5.2: Structure of the LTE transmitter.

In the first step of the transmitter processing, the user data is generated depending on the previous Acknowledgement (ACK) signal. If the previous user data Transport Block (TB) was not acknowledged, the stored TB is retransmitted using a Hybrid Automatic Repeat reQuest (HARQ) scheme. Then a Cyclic Redundancy Check (CRC) is calculated and appended to each user's TB. The data of each user is independently encoded using a turbo encoder with Quadratic Permutation Polynomial (QPP)-based interleaving. Each block of coded bits is then interleaved and rate-matched with a target rate depending on the received Channel Quality Indicator (CQI) user feedback.

The encoding process is followed by the data modulation, which maps the channel-encoded TB to complex modulation symbols. Depending on the CQI, a modulation scheme is selected for the corresponding RB. Possible modulations for the DL-SCH are 4-QAM, 16-QAM, and 64-QAM.

The modulated transmit symbols are then mapped to up to four transmit antennas. This antenna mapping depends on the Rank Indicator (RI) feedback and provides different multi-antenna schemes.

Finally, the individual symbols to be transmitted on each antenna are mapped to the resource elements. The assignment of a set of RBs to UEs is carried out by the scheduler based on the CQI reports from the UEs. The downlink scheduling is carried out on a subframe basis with a subframe duration of 1 ms. [15]
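As an illustration of the CQI-driven modulation selection described above, the following sketch maps a CQI index to one of the three DL-SCH modulation schemes. The CQI ranges follow the commonly used LTE CQI table and are an assumption here, since the report does not reproduce that table; the associated code rates are omitted.

# Sketch (assumption): map a reported CQI index to a DL-SCH modulation scheme and
# bits per symbol, roughly following the usual LTE CQI table (QPSK/16-QAM/64-QAM bands).
def cqi_to_modulation(cqi: int) -> tuple[str, int]:
    if not 1 <= cqi <= 15:
        raise ValueError("CQI index out of range (1..15); 0 means 'out of range'")
    if cqi <= 6:
        return "4-QAM (QPSK)", 2     # 2 bits per symbol
    if cqi <= 9:
        return "16-QAM", 4           # 4 bits per symbol
    return "64-QAM", 6               # 6 bits per symbol

for cqi in (1, 7, 12, 15):
    print(cqi, "->", cqi_to_modulation(cqi))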

5.1.3 Receiver

The receiver structure is shown in Figure 5.3. Each UE receives the signal transmitted by the eNodeB and performs the reverse physical-layer processing of the transmitter. First, the receiver has to identify the RBs that carry its designated information. The estimation of the channel is performed using the reference signals available in the resource grid. Based on this channel estimation, the quality of the channel may be evaluated and the appropriate feedback information calculated. The channel knowledge is also used for the demodulation and soft-demapping of the OFDM signal.

Figure 5.3: Structure of the LTE receiver.

Finally, the UE performs HARQ combining and channel decoding. In order to cut down processing time, a CRC check of the decoded block is performed at every turbo iteration, and if it passes, decoding is stopped. The impact of the additional CRC checks is negligible, as a turbo decoder iteration requires a computation time three orders of magnitude larger than the CRC check. After each evaluation, the receiver provides the information necessary to evaluate the figures of merit, including user and cell throughput, Bit Error Ratio (BER), and Block Error Ratio (BLER). [15]
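The early-stopping rule described here amounts to checking the CRC after every turbo iteration and leaving the loop as soon as the check passes. A schematic sketch follows, in which turbo_iteration, hard_decision, and crc_ok are hypothetical placeholders for the simulator's actual decoding and CRC routines.

def decode_with_early_stop(llr_in, turbo_iteration, hard_decision, crc_ok, max_iterations=8):
    """Sketch of CRC-based early stopping: run at most max_iterations turbo iterations,
    but stop as soon as the decoded block passes its CRC. The three callables are
    hypothetical stand-ins for the real decoder and CRC functions."""
    state = llr_in
    for iteration in range(1, max_iterations + 1):
        state = turbo_iteration(state)          # one (expensive) turbo decoder iteration
        bits = hard_decision(state)             # tentative hard decisions
        if crc_ok(bits):                        # cheap CRC check after every iteration
            return bits, iteration              # early exit: block decoded correctly
    return hard_decision(state), max_iterations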

5.2 SIMULATION RESULTS OF SCHEDULING ALGO-RITHM

In this section we present simulation results obtained with the above standard compliant LTE link level simulator implemented in MATLAB.

Figure 5.4 shows sample aggregate UE results for the proportional fair scheduler, as well as some cell-related statistics; similar figures can be obtained for the other algorithms. For the UE-related results, the UEs from which the results are obtained are the ones pertaining to any of the selected cells. Deactivated UEs (i.e., NaN values) are ignored. A scatter plot shows, for each UE in the set, the mapping between the wideband SINR and the throughput/spectral efficiency. Since many points could overlap, there is an option of showing a binned (over wideband SINR) mean throughput mapping (in red).

Figure 5.4: Plots showing the ECDF of the average UE throughput (upper left), spectral efficiency (upper right), and wideband SINR (lower left), and the mapping between the wideband SINR and the average throughput for each UE. The results are computed from the UEs attached to the selected eNodeBs. Some overall statistics are shown in the grey text box.

The resulting plots are shown in Figures 5.5 and 5.6 and depict a comparison between round robin, proportional fair, best CQI, max-min, max throughput, and resource fair scheduling algorithms for a 2 × 2 CLSM LTE setup. The mean value is marked in the CDF as a black dot.

Figure 5.5: UE throughput empirical CDF for each of the scheduler setups

From the simulation results, we find that max-min has the best fairness performance and a high mean throughput. However, a weakness of the Maximum Fairness approach is that the rate distribution among users is not flexible, and the total throughput is largely limited by the user with the worst SINR, as stated previously. Considering this weakness, the proportional fair scheduler arguably offers the better overall performance, with good fairness and a large mean throughput. Note that the best CQI scheduler has the worst fairness performance, as predicted, since it always assigns the resources to the channel with the best CQI.

CHAPTER 6

Summary and Conclusions

The evolution of wireless communication has accelerated in recent years with the introduction of new generations of standards. This evolution is driven by demands for higher data rates, lower latency, and better coverage, all aimed at user satisfaction.

In this report, I give a brief introduction to 4G LTE, including an overview of the standard, how it came about, and its main features. According to [4], 4G LTE should satisfy requirements in specific areas; these are presented in Chapter 1. After that, the overall architecture and channel structure of LTE are discussed in Chapter 2, which provides an overview of the whole LTE picture. In the next chapter, I focus on the physical layer; specifically, I illustrate the time-frequency structure, duplex schemes, transport channels, and downlink reference signals. In Chapter 4, I summarize some of the best-known scheduling algorithms, which schedule the resource blocks in the physical layer; a good scheduling algorithm increases the throughput of the physical layer while guaranteeing fairness between UEs. In the last chapter, I use the simulator to simulate the different algorithms and compare their throughput and fairness. In practice, a trade-off must be made when selecting a suitable scheduling algorithm. As the evolution toward 4G wireless communication standards continues, physical-layer throughput can be further improved by new technologies.


