
Presented by: K Swaraj Gowtham, G Srinivasa Rao, B Gopi Krishna

TURBO CODES

Turbo Code Concepts

Log Likelihood Algebra

Interleaving & Concatenated Codes

Encoding with RSC

Turbo Codes

Objectives:

Studying channel coding

Understanding channel capacity

Ways to increase data rate

Providing a reliable communication link

Introduction

Communication System

A communication system follows a structured, modular approach with various components.

[Block diagram: formatting/digitization, source coding, channel coding, multiplexing/multiple-access techniques, and modulation on the transmit (send) side, with the corresponding inverse operations on the receive side.]

CHANNEL CODING

Channel coding can be categorized into two areas:

Waveform (signal design): M-ary signaling, antipodal signaling, orthogonal signaling, and trellis-coded modulation, which yield better detectable signals.

Structured sequences: block, convolutional, and turbo codes, which add structured redundancy.

Structured Redundancy

The channel encoder maps each k-bit input word to an n-bit output codeword (code sequence).

Redundancy = (n - k)

Code rate = k/n
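As a quick illustration, the redundancy and code rate follow directly from n and k; the (n, k) = (7, 4) values below are an assumption for the example, not taken from the slides.

```python
# Minimal sketch: redundancy and code rate of an (n, k) block code.
# The (7, 4) values are illustrative assumptions, not from the slides.
n, k = 7, 4
redundancy = n - k      # number of redundant (parity) bits per codeword
code_rate = k / n       # fraction of transmitted bits carrying data

print(f"redundancy = {redundancy} bits, code rate = {code_rate:.3f}")
# redundancy = 3 bits, code rate = 0.571
```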

A turbo code is a refinement of the concatenated encoding structure plus an iterative algorithm for decoding the associated code sequence.

A concatenated coding scheme is a method for achieving large coding gains by combining two or more relatively simple building blocks, or component codes.

TURBO CODES

Likelihood Functions: The mathematical foundation of hypothesis testing rests on Bayes' theorem. The a posteriori probability (APP) of a decision, in terms of a continuous-valued random variable x, is

P(d = i | x) = p(x | d = i) P(d = i) / p(x),    i = 1, ..., M    (1)

where P(d = i | x) is the APP, p(x | d = i) is the conditional pdf (likelihood) of the received signal x given that the data d belongs to class i, and P(d = i) is the a priori probability.

TURBO CODE CONCEPTS

Before the experiment, there generally exists an a priori probability P(d = i). The experiment consists of using Equation (1) for computing the APP, P(d = i|x), which can be thought of as a “refinement” of the prior knowledge about the data, brought about by examining the received signal x.
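A minimal sketch of this refinement step, assuming a channel model in which x = d + Gaussian noise with standard deviation sigma (the noise model and the example numbers are illustrative assumptions, not fixed by the slides):

```python
import math

def app_plus(x, prior_plus=0.5, sigma=1.0):
    """A posteriori probability P(d = +1 | x) from Equation (1),
    assuming x = d + Gaussian noise with standard deviation sigma."""
    def likelihood(x, mean):
        return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    num = likelihood(x, +1.0) * prior_plus                   # p(x | d = +1) P(d = +1)
    den = num + likelihood(x, -1.0) * (1.0 - prior_plus)     # p(x) by total probability
    return num / den

print(app_plus(0.3))   # ~0.65: the received sample refines the 0.5 prior toward d = +1
```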

The Two-Signal Class Case:

The maximum a posteriori (MAP) decision rule for the two-signal class case is

P(d = +1 | x)  ≷  P(d = -1 | x)

where hypothesis H1 (d = +1) is chosen if the left-hand side is larger, and hypothesis H2 (d = -1) otherwise.

Binary logical elements 1 and 0 are represented electronically by the voltages +1 and -1, where d represents these voltages.

The rightmost function, p(x|d = +1), shows the pdf of the random variable x conditioned on d = +1 being transmitted. The leftmost function, p(x|d = -1), illustrates a similar pdf conditioned on d = -1 being transmitted.

A line subtended from xk, an arbitrary value taken from the full range of values of x, intercepts the two likelihood functions, yielding the two likelihood values ℓ1 = p(xk | dk = +1) and ℓ2 = p(xk | dk = -1).
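A sketch of these two likelihood values, assuming unit-variance Gaussian pdfs centered at +1 and -1 (the channel model and the sample value are illustrative assumptions):

```python
import math

def gaussian_pdf(x, mean, sigma=1.0):
    """Likelihood p(x | d = mean) for a Gaussian channel with standard deviation sigma."""
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

x_k = 0.25                          # an arbitrary received sample
ell_1 = gaussian_pdf(x_k, +1.0)     # l1 = p(xk | dk = +1)
ell_2 = gaussian_pdf(x_k, -1.0)     # l2 = p(xk | dk = -1)
print(ell_1, ell_2)                 # l1 > l2: +1 is the more likely transmitted symbol
```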

The general expression for the MAP rule in terms of APPs follows by substituting Equation (1) into the decision rule:

p(x | d = +1) P(d = +1)  ≷  p(x | d = -1) P(d = -1)    (decide H1 if >, H2 if <)

The previous equation, expressed as a ratio, yields the so-called likelihood ratio test:

p(x | d = +1) / p(x | d = -1)  ≷  P(d = -1) / P(d = +1)    (decide H1 if >, H2 if <)
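A minimal sketch of this likelihood ratio test, under the same illustrative Gaussian-channel assumption used above:

```python
import math

def likelihood(x, d, sigma=1.0):
    """p(x | d), assuming x = d + Gaussian noise (illustrative channel model)."""
    return math.exp(-(x - d) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def map_decision(x, prior_plus=0.5):
    """Likelihood ratio test: return +1 (hypothesis H1) or -1 (hypothesis H2)."""
    ratio = likelihood(x, +1.0) / likelihood(x, -1.0)
    threshold = (1.0 - prior_plus) / prior_plus       # P(d = -1) / P(d = +1)
    return +1 if ratio > threshold else -1

print(map_decision(-0.4))                    # -1: negative sample, equal priors
print(map_decision(-0.4, prior_plus=0.9))    # +1: a strong prior on +1 overrides the sample
```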

Log-Likelihood Ratio

Taking the logarithm of both sides of the MAP rule expressed in APPs gives the log-likelihood ratio (LLR):

L(d | x) = ln [ P(d = +1 | x) / P(d = -1 | x) ] = ln [ p(x | d = +1) / p(x | d = -1) ] + ln [ P(d = +1) / P(d = -1) ]

To simplify the notation, this is rewritten as

L'(d̂) = Lc(x) + L(d)

where Lc(x) is the LLR of the channel measurement and L(d) is the a priori LLR of the data. At the output of the decoder it is equal to

L(d̂) = L'(d̂) + Le(d̂) = Lc(x) + L(d) + Le(d̂)

where Le(d̂) is the extrinsic LLR contributed by the decoder.

This equation shows that the output LLR of a systematic decoder consists of a channel measurement, a priori knowledge of the data, and an extrinsic LLR stemming solely from the decoder. This soft decoder output L(d̂) is a real number that provides the hard decision as well as the reliability of that decision. The sign of L(d̂) denotes the hard decision: for positive values of L(d̂) decide that d = +1, and for negative values decide that d = -1. The magnitude of L(d̂) denotes the reliability of that decision.
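A sketch of how the sign and magnitude of the soft output are used, with Lc(x) = 2x/sigma^2 for the illustrative Gaussian channel assumed above (the extrinsic value is an arbitrary example number):

```python
def soft_output_decision(L_c, L_apriori=0.0, L_extrinsic=0.0):
    """Combine channel, a priori, and extrinsic LLRs; return hard decision and reliability."""
    L_total = L_c + L_apriori + L_extrinsic    # L(d^) = Lc(x) + L(d) + Le(d^)
    hard = +1 if L_total >= 0 else -1          # sign of L(d^)      -> hard decision
    reliability = abs(L_total)                 # magnitude of L(d^) -> confidence in it
    return hard, reliability

sigma = 1.0
x = 0.8                                        # received channel sample
L_c = 2 * x / sigma ** 2                       # channel LLR for this Gaussian model
print(soft_output_decision(L_c, L_apriori=0.0, L_extrinsic=-0.5))
# decide d = +1 (positive sign) with reliability |L| ~ 1.1
```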

Log Likelihood Algebra

For statistically independent data d, the sum of two log-likelihood ratios is defined as

L(d1) ⊞ L(d2) = ln [ (e^L(d1) + e^L(d2)) / (1 + e^(L(d1) + L(d2))) ] ≈ (-1) × sgn[L(d1)] × sgn[L(d2)] × min(|L(d1)|, |L(d2)|)

where ⊞ denotes the log-likelihood addition corresponding to the modulo-2 sum of the underlying data.
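A small sketch of this log-likelihood algebra, using the LLR convention L(d) = ln[P(d = +1)/P(d = -1)] from above; the numeric inputs are arbitrary illustrations:

```python
import math

def llr_add(L1, L2):
    """Exact log-likelihood 'addition' for the modulo-2 sum of two independent bits."""
    return math.log((math.exp(L1) + math.exp(L2)) / (1.0 + math.exp(L1 + L2)))

def llr_add_approx(L1, L2):
    """Common approximation: (-1) * sgn(L1) * sgn(L2) * min(|L1|, |L2|)."""
    return -math.copysign(1.0, L1) * math.copysign(1.0, L2) * min(abs(L1), abs(L2))

print(llr_add(2.0, -1.0))          # ~ +0.74
print(llr_add_approx(2.0, -1.0))   # +1.0 -- sign agrees, magnitude is an approximation
```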

INTERLEAVING

Interleaving is a concept that helps greatly in the case of channels with memory.

A channel with memory exhibits mutually dependent transmission impairments.

A channel with multipath fading is an example of a channel with memory.

Errors caused by disturbances in these types of channels occur in bursts (burst errors).

Interleaving only requires knowledge of the span of the channel memory.

Interleaving at the transmitter side and de-interleaving at the receiver side allow the burst errors to be corrected.

The interleaver shuffles the code symbols over a span of several block lengths or constraint lengths.

It makes the memory channel look like a memoryless one to the decoder.

Two types of interleavers: block interleavers and convolutional interleavers.

Block Interleaving

A block interleaver accepts the coded symbols in blocks from the encoder, permutes the symbols, and then feeds the rearranged symbols to the modulator.

The minimum end-to-end delay is (2MN - 2M + 2) symbol times, where the encoded sequence is written into an M × N array.

It requires a memory of 2MN symbols (MN at each end of the channel).

The choice of M is dependent on the coding scheme used.

The choice of N for t-error-correcting codes must overbound the expected burst length divided by t.
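A minimal sketch of a row-in, column-out block interleaver and its de-interleaver; the M = 3, N = 5 dimensions are an illustrative assumption:

```python
def block_interleave(symbols, M, N):
    """Write symbols row-by-row into an M x N array, then read them out column-by-column."""
    assert len(symbols) == M * N
    rows = [symbols[r * N:(r + 1) * N] for r in range(M)]
    return [rows[r][c] for c in range(N) for r in range(M)]

def block_deinterleave(symbols, M, N):
    """Inverse operation: write column-by-column, read row-by-row."""
    cols = [symbols[c * M:(c + 1) * M] for c in range(N)]
    return [cols[c][r] for r in range(M) for c in range(N)]

data = list(range(15))                       # one 3 x 5 block of coded symbols
tx = block_interleave(data, M=3, N=5)
assert block_deinterleave(tx, M=3, N=5) == data
print(tx)   # [0, 5, 10, 1, 6, 11, ...] -- adjacent code symbols end up M positions apart
```

With one codeword per row (as assumed here), a burst of consecutive channel symbols is spread across different codewords after de-interleaving, so each codeword sees only a few of the burst errors.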

Convolutional Interleaving

In this type, the code symbols are sequentially shifted into the bank of N registers; each successive register contains J symbols more storage than the preceding one.

In this case, the end-to-end delay is M(N - 1) symbols and the memory required is M(N - 1)/2 at each end of the channel, where M = NJ.
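A sketch of a convolutional (multiplexed shift-register) interleaver/de-interleaver pair; the branch count N = 4, J = 1, and the zero fill value for the initial register contents are illustrative assumptions:

```python
from collections import deque

class ConvInterleaver:
    """N branches visited cyclically; branch i delays its symbols by delays[i] register cells."""

    def __init__(self, N, J, delays=None, fill=0):
        delays = delays if delays is not None else [i * J for i in range(N)]
        self.branches = [deque([fill] * d) for d in delays]
        self.pos = 0                                   # commutator position

    def push(self, symbol):
        branch = self.branches[self.pos]
        self.pos = (self.pos + 1) % len(self.branches)
        if not branch:                                 # zero-delay branch: pass straight through
            return symbol
        branch.append(symbol)
        return branch.popleft()

N, J = 4, 1
tx = ConvInterleaver(N, J)                             # branch delays 0, J, 2J, 3J
rx = ConvInterleaver(N, J, delays=[(N - 1 - i) * J for i in range(N)])  # complementary delays
data = list(range(1, 21))
out = [rx.push(tx.push(s)) for s in data]
print(out)   # NJ(N - 1) = 12 fill symbols of transient, then 1, 2, 3, ... reappears in order
```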

Concatenated Codes

A concatenated code uses two levels of coding: an inner code and an outer code (higher rate).

Popular concatenated codes: convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code.

The purpose is to reduce the overall complexity while still achieving the required error performance.

However, the concatenated system performance is severely degraded by correlated errors among successive symbols.

Encoding with Recursive systematic codes

Turbo codes are generated by parallel concatenation of component convolutional codes

Consider an encoder with code rate 1/2, constraint length K, and input bit dk at time k. The corresponding codeword (uk, vk) is

uk = Σ (i = 0 to K-1) g1i d(k-i)   (mod 2)

vk = Σ (i = 0 to K-1) g2i d(k-i)   (mod 2)

G1 = { g1i } and G2 = { g2i } are the code generators, and dk is represented as a binary digit

This encoder can be visualized as a discrete-time finite impulse response (FIR) linear system, giving rise to the familiar nonsystematic convolutional (NSC) code.

An example of an NSC code: G1 = {1 1 1}, G2 = {1 0 1}, K = 3, code rate = 1/2.
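A minimal sketch of this FIR-style NSC encoder for G1 = {1 1 1}, G2 = {1 0 1}; the function name, bit ordering, and all-zero initial state are illustrative assumptions:

```python
def nsc_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Nonsystematic convolutional (NSC) encoder, rate 1/2, constraint length K = len(g1).
    Both output streams are modulo-2 convolutions of the input with the generators."""
    K = len(g1)
    state = [0] * (K - 1)                    # shift register holding d(k-1) ... d(k-K+1)
    out = []
    for d in bits:
        window = [d] + state                 # d(k), d(k-1), ..., d(k-K+1)
        u = sum(g * b for g, b in zip(g1, window)) % 2
        v = sum(g * b for g, b in zip(g2, window)) % 2
        out.append((u, v))
        state = [d] + state[:-1]             # shift the register
    return out

print(nsc_encode([1, 1, 1, 0]))   # [(1, 1), (0, 1), (1, 0), (0, 1)]
```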

At large Eb/N0 values, the error performance of an NSC is better than that of a systematic code

A class of infinite impulse response (IIR) convolutional codes [3], known as recursive systematic convolutional (RSC) codes, has been proposed as building blocks for a turbo code.

For high code rates RSC codes result in better error performance than the best NSC codes at any value of Eb/N0

For an RSC code with K = 3, the shift-register input ak is recursively calculated as

ak = dk + Σ (i = 1 to K-1) g'i a(k-i)   (mod 2)

where g'i is respectively equal to g1i if uk = dk, and to g2i if vk = dk.

An example of a recursive encoder and its trellis diagram is shown in Figures 6(a) and 6(b).

Trellis diagram

Example: Recursive Encoders and Their Trellis Diagrams

a) Using the RSC encoder in Figure 6(a), verify the section of the trellis

structure (diagram) shown in Figure 6(b).

b) For the encoder in part a), start with the input data sequence {dk} = 1 1 1 0, and show the step-by-step encoder procedure for finding the output codeword.

Validation of trellis diagram

Encoding a bit sequence with RSC encoder
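A sketch of the encoding in part b) for {dk} = 1 1 1 0, assuming an RSC encoder that uses G1 = {1 1 1} for the recursion (feedback) and G2 = {1 0 1} for the parity output, starting from the all-zero state; the actual generator assignment is the one defined by Figure 6(a):

```python
def rsc_encode(bits, g_fb=(1, 1, 1), g_par=(1, 0, 1)):
    """Recursive systematic convolutional (RSC) encoder sketch, rate 1/2, K = 3.
    g_fb drives the recursion a(k) = d(k) + g_fb[1]*a(k-1) + g_fb[2]*a(k-2) (mod 2);
    g_par forms the parity bit from a(k), a(k-1), a(k-2); the register starts all-zero."""
    a1 = a2 = 0                               # a(k-1), a(k-2)
    out = []
    for d in bits:
        a = (d + g_fb[1] * a1 + g_fb[2] * a2) % 2                  # recursive register input
        u = d                                                      # systematic output u(k)
        v = (g_par[0] * a + g_par[1] * a1 + g_par[2] * a2) % 2     # parity output v(k)
        out.append((u, v))
        a1, a2 = a, a1                                             # shift the register
    return out

print(rsc_encode([1, 1, 1, 0]))   # [(1, 1), (1, 0), (1, 1), (0, 0)]
```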

Concatenation of RSC Codes

Good turbo codes have been constructed from component codes having short constraint lengths (K = 3 to 5).

There is no limit to the number of encoders that may be concatenated.

We should avoid pairing low-weight codewords from one encoder with low-weight codewords from the other encoder. Many such pairings can be avoided by proper design of the interleaver.

Figure: Parallel concatenation of RSC codes.

If the component encoders are not recursive, the unit weight input sequence 0 0 … 0 0 1 0 0 … 0 0 will always generate a low-weight codeword at the input of a second encoder for any interleaver design.

If the component codes are recursive, a weight-1 input sequence generates an infinite impulse response.

For the case of recursive codes, the weight-1 input sequence does not yield the minimum-weight codeword out of the encoder.

Turbo code performance is largely influenced by minimum-weight codewords
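A small self-contained sketch contrasting the two behaviors, reusing the illustrative generator choices assumed earlier (G1 = {1 1 1}, G2 = {1 0 1}): a weight-1 input produces a small, fixed output weight from the nonrecursive (FIR) encoder, but an output weight that keeps growing with block length from the recursive (IIR) encoder.

```python
def nsc_weight(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Output weight of the nonrecursive (FIR) rate-1/2 encoder for a given input."""
    s = [0, 0]
    w = 0
    for d in bits:
        win = [d] + s
        w += sum(g * b for g, b in zip(g1, win)) % 2
        w += sum(g * b for g, b in zip(g2, win)) % 2
        s = [d] + s[:-1]
    return w

def rsc_weight(bits, g_fb=(1, 1, 1), g_par=(1, 0, 1)):
    """Output weight (systematic + parity) of the recursive (IIR) rate-1/2 encoder."""
    a1 = a2 = 0
    w = 0
    for d in bits:
        a = (d + g_fb[1] * a1 + g_fb[2] * a2) % 2
        w += d + (g_par[0] * a + g_par[1] * a1 + g_par[2] * a2) % 2
        a1, a2 = a, a1
    return w

impulse = [1] + [0] * 49              # weight-1 input: a single 1 followed by zeros
print("NSC output weight:", nsc_weight(impulse))   # 5: the FIR response dies out
print("RSC output weight:", rsc_weight(impulse))   # grows with the block length (IIR)
```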
