Page 1

Modulation, Demodulation and Coding Course

Term 3 - 2008

Catharina Logothetis

Lecture 11

Page 2

Last time, we talked about:

Another class of linear codes, known as Convolutional codes.

We studied the structure of the encoder and different ways of representing it.

Page 3

Today, we are going to talk about:

What are the state diagram and trellis representations of the code?

How is decoding performed for Convolutional codes?

What is a Maximum likelihood decoder?

What are soft decisions and hard decisions?

How does the Viterbi algorithm work?

Page 4

Block diagram of the DCS

[Figure: Information source → Rate 1/n Conv. encoder → Modulator → Channel → Demodulator → Rate 1/n Conv. decoder → Information sink]

Input sequence: $\mathbf{m} = (m_1, m_2, \ldots, m_i, \ldots)$

Codeword sequence: $\mathbf{U} = G(\mathbf{m}) = (U_1, U_2, U_3, \ldots, U_i, \ldots)$, where $U_i = (u_{1i}, \ldots, u_{ji}, \ldots, u_{ni})$ is the branch word at time $i$ ($n$ coded bits per branch word).

Received sequence: $\mathbf{Z} = (Z_1, Z_2, Z_3, \ldots, Z_i, \ldots)$, where $Z_i = (z_{1i}, \ldots, z_{ji}, \ldots, z_{ni})$ collects the $n$ demodulator outputs for branch word $U_i$.

Decoded sequence: $\hat{\mathbf{m}} = (\hat{m}_1, \hat{m}_2, \ldots, \hat{m}_i, \ldots)$

Page 5

State diagram

A finite-state machine can only occupy a finite number of states.

State of a machine: the smallest amount of information that, together with the current input to the machine, can predict the output of the machine.

In a Convolutional encoder, the state is represented by the content of the memory. Hence, there are $2^{K-1}$ states; for the $K = 3$ encoder used in this lecture, that is $2^2 = 4$ states.

Page 6

State diagram – cont’d

A state diagram is a way to represent the encoder.

A state diagram contains all the states and all possible transitions between them.

Only two transitions can initiate from each state.

Only two transitions can end up at each state.

Page 7

State diagram – cont’d

[Figure: state diagram with the four states $S_0 = 00$, $S_1 = 01$, $S_2 = 10$, $S_3 = 11$; every transition is labeled input/output (branch word): 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]

Current state   Input   Next state   Output (branch word)
S0 = 00         0       S0 = 00      00
S0 = 00         1       S2 = 10      11
S1 = 01         0       S0 = 00      11
S1 = 01         1       S2 = 10      00
S2 = 10         0       S1 = 01      10
S2 = 10         1       S3 = 11      01
S3 = 11         0       S1 = 01      01
S3 = 11         1       S3 = 11      10
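The slides never list the generator polynomials explicitly, but the branch words above are consistent with the common $K = 3$, rate-1/2 encoder with generators $g_1 = 111$ and $g_2 = 101$. A minimal sketch, under that assumption, that rebuilds the transition table:

# Sketch: derive the state-transition table of a K=3, rate-1/2
# convolutional encoder, assuming generators g1=111, g2=101
# (consistent with the branch words in the table above).
K = 3
G = (0b111, 0b101)                        # assumed generator polynomials

def step(state, bit):
    """Return (next_state, branch_word) for one input bit.
    `state` holds the K-1 = 2 most recent input bits."""
    reg = (bit << (K - 1)) | state        # shift the new bit into the register
    out = tuple((bin(reg & g).count("1") & 1) for g in G)  # parity per generator
    return reg >> 1, out

for state in range(2 ** (K - 1)):
    for bit in (0, 1):
        nxt, (u1, u2) = step(state, bit)
        print(f"S{state}={state:02b} input={bit} -> S{nxt}={nxt:02b} output={u1}{u2}")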

Page 8

Trellis – cont’d

The Trellis diagram is an extension of the state diagram that explicitly shows the passage of time.

Example of a section of the trellis for the rate-1/2 code:

[Figure: one trellis section between times $t_i$ and $t_{i+1}$. The states $S_0 = 00$, $S_1 = 01$, $S_2 = 10$, $S_3 = 11$ appear as nodes at each time instant, and every transition of the table above is drawn as a branch labeled input/branch word: 0/00 and 1/11 from $S_0$, 0/11 and 1/00 from $S_1$, 0/10 and 1/01 from $S_2$, 0/01 and 1/10 from $S_3$.]

Page 9

Trellis – cont’d

A trellis diagram for the example code:

[Figure: the full trellis from $t_1$ to $t_6$ (five sections, each carrying the branch labels above), with the path for input bits 1 0 1 0 0 highlighted. The last two inputs are tail bits; the output bits are 11 10 00 10 11.]
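As a quick cross-check of the figure, a short sketch (same assumed generators $g_1 = 111$, $g_2 = 101$ as before) that walks the trellis for the input 1 0 1 0 0 and prints the branch words:

# Sketch: encode the slide's input 1 0 1 0 0 (last two bits are the
# tail) and print the branch words shown along the highlighted path.
def encode(bits, K=3, G=(0b111, 0b101)):
    state, words = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        words.append("".join(str(bin(reg & g).count("1") & 1) for g in G))
        state = reg >> 1                  # next state = newest K-1 bits
    return words

print(encode([1, 0, 1, 0, 0]))            # -> ['11', '10', '00', '10', '11']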

Page 10

Trellis – cont’d

[Figure: the same trellis, redrawn with only the branches that can actually occur: the first sections contain only the branches reachable from the initial state $S_0$, and the last $K - 1 = 2$ sections contain only the input-0 branches forced by the tail bits. Input bits 1 0 1 0 0 (the last two are tail bits), output bits 11 10 00 10 11.]

Page 11

Optimum decoding

If the input message sequences are equally likely, the optimum decoder, i.e. the decoder that minimizes the probability of error, is the Maximum likelihood (ML) decoder.

The ML decoder selects, among all possible codewords, the codeword that maximizes the likelihood function $p(\mathbf{Z}\mid\mathbf{U}^{(m)})$, where $\mathbf{Z}$ is the received sequence and $\mathbf{U}^{(m)}$ is one of the possible codewords.

ML decoding rule:

Choose $\mathbf{U}^{(m')}$ if $p(\mathbf{Z}\mid\mathbf{U}^{(m')}) = \max\limits_{\text{over all } \mathbf{U}^{(m)}} p(\mathbf{Z}\mid\mathbf{U}^{(m)})$

For a block of $L$ information bits there are $2^L$ codewords to search!
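To make the size of that search concrete, here is a naive exhaustive ML decoder, a minimal sketch assuming the $K = 3$, $g_1 = 111$, $g_2 = 101$ code introduced above, with hard decisions so that "most likely" means smallest Hamming distance:

# Sketch: brute-force ML decoding. Enumerates all 2^L messages --
# fine for L = 3, hopeless for large L, which is what motivates
# the Viterbi algorithm later in the lecture.
from itertools import product

def encode(bits, K=3, G=(0b111, 0b101)):       # assumed generators
    """Encode `bits` plus K-1 tail zeros; return the coded bit list."""
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def ml_decode(z, L):
    """Try every message, keep the codeword closest to z."""
    return min(product((0, 1), repeat=L),
               key=lambda m: sum(a != b for a, b in zip(encode(m), z)))

z = encode([1, 0, 1])          # noiseless received sequence, for illustration
print(ml_decode(z, L=3))       # -> (1, 0, 1); the cost grows as 2^L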

Page 12

ML decoding for memoryless channels

Due to the independent channel statistics of a memoryless channel, the likelihood function factors into a product over branches and bits:

$p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = \prod_{i=1}^{\infty} p(Z_i \mid U_i^{(m)}) = \prod_{i=1}^{\infty} \prod_{j=1}^{n} p(z_{ji} \mid u_{ji}^{(m)})$

and, equivalently, the log-likelihood function becomes a sum:

$\gamma_U(m) = \log p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = \sum_{i=1}^{\infty} \log p(Z_i \mid U_i^{(m)}) = \sum_{i=1}^{\infty} \sum_{j=1}^{n} \log p(z_{ji} \mid u_{ji}^{(m)})$

Here $\gamma_U(m)$ is the path metric, $\log p(Z_i \mid U_i^{(m)})$ is the branch metric, and $\log p(z_{ji} \mid u_{ji}^{(m)})$ is the bit metric. The path metric up to time index $i$ is called the partial path metric.

ML decoding rule:
Choose the path with the maximum metric among all the paths in the trellis. This path is the "closest" path to the transmitted sequence.

Page 13

Binary symmetric channels (BSC)

With hard decisions, the channel between modulator input and demodulator output is a BSC with crossover probability $p$:

$p(1\mid 0) = p(0\mid 1) = p, \qquad p(1\mid 1) = p(0\mid 0) = 1 - p$

[Figure: BSC transition diagram; each bit is flipped with probability $p$ and passed through unchanged with probability $1 - p$.]

If $d_m$ is the Hamming distance between $\mathbf{Z}$ and $\mathbf{U}^{(m)}$, and $L_n$ is the size of the coded sequence, then

$p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = p^{d_m} (1-p)^{L_n - d_m}$

$\gamma_U(m) = -d_m \log\frac{1-p}{p} + L_n \log(1-p)$

The second term is the same for all codewords, so (for $p < 1/2$) maximizing $\gamma_U(m)$ amounts to minimizing $d_m$.

ML decoding rule:
Choose the path with the minimum Hamming distance from the received sequence.
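A small numeric check of why the constant term drops out (numbers chosen only for illustration): with $p = 0.01$ and $L_n = 10$, natural logarithms give

$\gamma_U(m) = -d_m \ln\frac{0.99}{0.01} + 10 \ln 0.99 \approx -4.595\, d_m - 0.101$

so every additional bit of Hamming distance costs the same 4.595, and ranking codewords by $\gamma_U(m)$ is exactly ranking them by $d_m$.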

Page 14

AWGN channels

For BPSK modulation, the transmitted sequence corresponding to the codeword $\mathbf{U}^{(m)}$ is denoted by $\mathbf{S}^{(m)} = (S_1^{(m)}, S_2^{(m)}, \ldots, S_i^{(m)}, \ldots)$, where $S_i^{(m)} = (s_{1i}^{(m)}, \ldots, s_{ji}^{(m)}, \ldots, s_{ni}^{(m)})$ and $s_{ji} = \pm\sqrt{E_c}$. Over the AWGN channel, maximizing the log-likelihood function reduces (up to terms common to all codewords) to maximizing

$\gamma_U(m) = \sum_{i=1}^{\infty} \sum_{j=1}^{n} z_{ji}\, s_{ji}^{(m)} = \langle \mathbf{Z}, \mathbf{S}^{(m)} \rangle$

i.e., the inner product (correlation) between $\mathbf{Z}$ and $\mathbf{S}^{(m)}$. Maximizing the correlation is equivalent to minimizing the Euclidean distance.

ML decoding rule:
Choose the path with the minimum Euclidean distance to the received sequence.
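A tiny sketch of that equivalence (received values invented for illustration): since $\|\mathbf{Z}-\mathbf{S}\|^2 = \|\mathbf{Z}\|^2 - 2\langle\mathbf{Z},\mathbf{S}\rangle + \|\mathbf{S}\|^2$ and both squared norms are the same for every candidate, the correlation and distance rankings always agree:

# Sketch: correlation vs. Euclidean distance for two candidate
# BPSK sequences (unit energy per coded bit assumed for simplicity).
z  = [0.9, -1.1, 0.4, 1.0]     # received soft values (illustrative)
s1 = [1, -1, 1, 1]             # candidate transmitted sequences
s2 = [1, -1, -1, 1]

corr  = lambda s: sum(zj * sj for zj, sj in zip(z, s))
dist2 = lambda s: sum((zj - sj) ** 2 for zj, sj in zip(z, s))

print(corr(s1), corr(s2))      # 3.4, 2.6   -> s1 has the larger correlation
print(dist2(s1), dist2(s2))    # ~0.38, ~1.98 -> and the smaller distance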

Page 15

Soft and hard decisions

In hard decision:

The demodulator makes a firm, or hard, decision on whether a one or a zero was transmitted, and provides no other information to the decoder, such as how reliable the decision is.

Hence, its output is only zero or one (the output is quantized to only two levels); the outputs are called "hard bits".

Decoding based on hard bits is called "hard-decision decoding".

Page 16

Soft and hard decisions – cont’d

In soft decision:

The demodulator provides the decoder with some side information together with the decision. The side information gives the decoder a measure of confidence in the decision.

The demodulator outputs, called "soft bits", are quantized to more than two levels.

Decoding based on soft bits is called "soft-decision decoding".

Compared with hard-decision decoding, soft-decision decoding gains about 2 dB on AWGN channels and about 6 dB on fading channels.

Page 17

The Viterbi algorithm

The Viterbi algorithm performs Maximum likelihood decoding.

It finds the path through the trellis with the best metric (maximum correlation or minimum distance).

It processes the demodulator outputs in an iterative manner.

At each step in the trellis, it compares the metrics of all paths entering each state and keeps only the path with the best metric, called the survivor, together with its metric.

It proceeds through the trellis by eliminating the least likely paths.

It reduces the decoding complexity to $L \cdot 2^{K-1}$!

Page 18

The Viterbi algorithm - cont’d

Viterbi algorithm:

A. Do the following setup:

For a data block of L bits, form the trellis. The trellis has $L + K - 1$ sections, or levels; it starts at time $t_1$ and ends at time $t_{L+K}$.

Label all the branches in the trellis with their corresponding branch metric.

For each state in the trellis at time $t_i$, denoted by $S(t_i) \in \{0, 1, \ldots, 2^{K-1}-1\}$, define a parameter $\Gamma(S(t_i), t_i)$.

B. Then, do the following:

Page 19

The Viterbi algorithm - cont’d

1. Set $\Gamma(0, t_1) = 0$ and $i = 2$.

2. At time $t_i$, compute the partial path metrics for all the paths entering each state.

3. Set $\Gamma(S(t_i), t_i)$ equal to the best partial path metric entering each state at time $t_i$. Keep the survivor path and delete the dead paths from the trellis.

4. If $i < L + K$, increase $i$ by 1 and return to step 2.

C. Start at state zero at time $t_{L+K}$. Follow the surviving branches backwards through the trellis. The path found is unique and corresponds to the ML codeword. (A sketch implementing steps A–C follows below.)
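A compact hard-decision rendering of steps A–C, a minimal sketch assuming the same $K = 3$, $g_1 = 111$, $g_2 = 101$ code as in the earlier sketches (names and structure are mine, not the lecture's); the metric is Hamming distance, so "best" means smallest:

# Sketch: hard-decision Viterbi decoder for the assumed K=3,
# rate-1/2 code. z is the list of received hard bits, L the number
# of information bits (K-1 tail zeros assumed appended by the encoder).
K, G = 3, (0b111, 0b101)
NSTATES = 2 ** (K - 1)

def branch(state, bit):
    """One trellis branch: next state and branch word."""
    reg = (bit << (K - 1)) | state
    return reg >> 1, [bin(reg & g).count("1") & 1 for g in G]

def viterbi(z, L):
    n = len(G)
    INF = float("inf")
    gamma = [0] + [INF] * (NSTATES - 1)        # step 1: Gamma(0, t1) = 0
    history = []                               # survivor branch per state, per section
    for i in range(L + K - 1):                 # steps 2-4, one section at a time
        zi = z[n * i : n * (i + 1)]
        new, surv = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if gamma[s] == INF:                # state not yet reachable
                continue
            for b in ((0,) if i >= L else (0, 1)):   # tail forces input 0
                nxt, u = branch(s, b)
                metric = gamma[s] + sum(a != c for a, c in zip(u, zi))
                if metric < new[nxt]:          # step 3: keep the survivor
                    new[nxt], surv[nxt] = metric, (s, b)
        gamma = new
        history.append(surv)
    bits, state = [], 0                        # step C: trace back from state 0
    for surv in reversed(history):
        state, b = surv[state]
        bits.append(b)
    return bits[::-1][:L]                      # drop the K-1 tail bits

z = [0,1, 1,0, 1,1, 1,0, 1,1]   # received sequence of the example that follows
print(viterbi(z, L=3))          # -> [0, 0, 1]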

Page 20

Example of Hard decision Viterbi decoding

$\mathbf{m} = (101)$, $\mathbf{U} = (11\ 10\ 00\ 10\ 11)$, $\mathbf{Z} = (01\ 10\ 11\ 10\ 11)$

[Figure: the five-section trellis of the previous slides, from $t_1$ to $t_6$, on which the decoding steps below are carried out.]

Page 21

Example of Hard decision Viterbi decoding – cont’d

Label all the branches with the branch metric (the Hamming distance between the branch word and the received $Z_i$).

[Figure: the trellis with a branch metric of 0, 1, or 2 written on every branch; the partial path metrics $\Gamma(S(t_i), t_i)$ will be accumulated on the states.]

Page 22

Example of Hard decision Viterbi decoding – cont’d

i = 2

[Figure: the partial path metrics $\Gamma(S(t_2), t_2)$ are computed for the two states reachable at $t_2$.]

Page 23

Example of Hard decision Viterbi decoding – cont’d

i = 3

[Figure: the partial path metrics $\Gamma(S(t_3), t_3)$ are computed for all four states; one survivor is kept per state.]

Page 24

Example of Hard decision Viterbi decoding – cont’d

i = 4

[Figure: the partial path metrics $\Gamma(S(t_4), t_4)$; survivors kept, dead paths deleted from the trellis.]

Page 25

Example of Hard decision Viterbi decoding – cont’d

i = 5

[Figure: the partial path metrics $\Gamma(S(t_5), t_5)$; in the tail sections only the input-0 branches remain.]

Page 26

Example of Hard decision Viterbi decoding – cont’d

i = 6

[Figure: the partial path metric $\Gamma(S(t_6), t_6)$ at the final state; the trellis terminates in state $S_0$.]

Page 27

Example of Hard decision Viterbi decoding – cont’d

Trace back along the survivors and then:

$\hat{\mathbf{U}} = (00\ 00\ 11\ 10\ 11)$, $\hat{\mathbf{m}} = (001)$

[Figure: the decoded path traced backwards from state $S_0$ at $t_6$.]

Note that $\hat{\mathbf{m}} \neq \mathbf{m}$: the three channel errors in $\mathbf{Z}$ drive the hard-decision decoder to a wrong, but closer, codeword.
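A quick check of that outcome (plain Python on the same data as above): the decoded codeword really is closer to $\mathbf{Z}$ than the transmitted one, so the minimum-distance decision is indeed $\hat{\mathbf{m}} = (001)$:

# Sketch: Hamming distances from the received Z to the transmitted
# codeword and to the decoded codeword of the example.
Z     = "0110111011"
U_tx  = "1110001011"   # codeword of m = 101
U_hat = "0000111011"   # codeword of m-hat = 001
ham = lambda a, b: sum(x != y for x, y in zip(a, b))
print(ham(Z, U_tx))    # -> 3 (the channel errors)
print(ham(Z, U_hat))   # -> 2, smaller, so 001 wins the ML search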

Page 28

Example of soft-decision Viterbi decoding

$\mathbf{m} = (101)$, $\mathbf{U} = (11\ 10\ 00\ 10\ 11)$

$\hat{\mathbf{U}} = (11\ 10\ 00\ 10\ 11)$, $\hat{\mathbf{m}} = (101)$

[Figure: soft-decision Viterbi decoding on the same trellis. The received $\mathbf{Z}$ is now a sequence of soft values (multiples of 1/3 such as $\pm 1$ and $\pm 2/3$); the branches carry correlation branch metrics (values such as $\pm 5/3$, $\pm 4/3$, $\pm 1/3$, 0, 2) and the states carry the partial path metrics $\Gamma(S(t_i), t_i)$ (values shown include $-5/3$, $1/3$, $10/3$, $13/3$, $14/3$).]

With the soft metric, the decoder recovers the transmitted message that hard-decision decoding missed.

-5/3