Contents

10 System Design with Codes  147
   10.1 Convolutional Codes and Implementation  148
      10.1.1 Notation and Terminology  148
      10.1.2 The Convolutional Code  149
      10.1.3 Examples  151
      10.1.4 Trellis Diagrams for Convolutional Codes  154
      10.1.5 Error Probabilities and Performance  155
   10.2 Convolutional Coding Tables and Decoder Complexity  157
      10.2.1 Implementations  157
      10.2.2 Coding Gain for Convolutional Codes  159
      10.2.3 Binary Symmetric Channel Error Probability  159
      10.2.4 Tables of Convolutional Codes  160
      10.2.5 Complexity  163
      10.2.6 Forcing a Known Ending State  163
   10.3 Coset Codes, Lattices, and Partitions  165
      10.3.1 Gain of Coset Codes  166
      10.3.2 Mapping By Set Partitioning  169
   10.4 One- and Two-dimensional Trellis Codes  174
      10.4.1 Rate 1/2 Code  174
      10.4.2 A Simple Rate 2/3 Trellis Code  174
      10.4.3 Code Design in One and Two Dimensions  177
      10.4.4 Decoder Complexity Measures  184
   10.5 Multidimensional Trellis Codes  186
      10.5.1 Lattice Codes and Multidimensional Partitioning  186
      10.5.2 Multidimensional Trellis Codes  199
   10.6 Theory of the Coset Code Implementation  207
      10.6.1 Encoder Simplification  207
      10.6.2 Decoder Complexity  212
      10.6.3 Decoding the Gossett (E8) Lattice  214
      10.6.4 Lattice Decoding Table  214
   10.7 Shaping Codes  216
      10.7.1 Non-Equiprobable Signaling and Shell Codes  218
      10.7.2 Voronoi/Block Shaping  224
      10.7.3 Trellis Shaping  233
   10.8 Block Codes  237
      10.8.1 Block Code Performance Analysis  237
      10.8.2 Cyclic Codes  237
      10.8.3 Reed-Solomon Encoder Implementations  239
      10.8.4 Reed-Solomon Decoder Implementations  239
      10.8.5 Block Code Selection  239
   Exercises - Chapter 10  240



A Finite-Field Algebra Review  254
   A.1 Groups, Rings, and Fields  254
   A.2 Galois Fields  255
B Various Results in Encoder Realization Theory  262
   B.1 Invariant Factors Decomposition and the Smith Canonical Forms  262
   B.2 Canonical Realizations of Convolutional Encoders  267
      B.2.1 Invariant Factors  267
      B.2.2 Minimal Encoders  272
      B.2.3 Basic Encoders  274
      B.2.4 Construction of Minimal Encoders  275
      B.2.5 Canonical Systematic Realization  277
C Lattices  279
   C.1 Elementary Lattice Operations  280
   C.2 Binary Lattices and Codes  280
      C.2.1 Association of lattices with binary codes  282
      C.2.2 16-, 24-, and 32-dimensional partitioning chains  285

Chapter 10

System Design with Codes

As a system designer, an engineer often uses codes to improve performance. Chapters 8 and 9 have suggested that transmission of sequences with consequent sequence detection can improve performance significantly. An upper bound on data rate for such improvement is the channel capacity of Chapter 8. More than a half-century after the introduction of capacity, the cumulative effort of many fine engineers and mathematicians has produced a repertoire of good codes that, with increasing complexity, can be used to approach capacity on both the BSC and the AWGN.

This chapter will focus on the tabulation and use of these codes, leaving the inner details and comparisons of the codes, as well as the design of yet other codes, to the continuing efforts of coding theorists.
The objective is to have a reference to which the designer can refer and estimate the various coding gains, complexities, and types of codes applicable in any given application. The codes listed here will be good ones, without aberrant undesirable properties or hidden problems, mainly because the efforts of others to understand coding have enabled such undesirable codes to be eliminated when searching for good codes.

Convolutional codes are studied for b < 1 in Sections 10.1 and 10.2 before progressing to coset/trellis codes (which in fact are based on internal convolutional codes) for b ≥ 1 in Section 10.3. Section 10.2 also discusses code complexity, which is largely in the decoder, and tabulates many of the best-known convolutional codes and their essential parameters, as well as illustrating implementation. Sections 10.4 and 10.5 tabulate a number of multidimensional lattices and coset codes. The appendices delve into some more theoretical aspects of coding and lattices for the interested reader.

Code intricacies and structure are heavily studied and published. We refer the reader to the many fine textbooks in this area, in particular S. Wicker's "Error Control Systems for Digital Communication and Storage" (Prentice Hall, 1995) on block codes, R. Johannesson and K. Zigangirov's "Fundamentals of Convolutional Coding" (IEEE Press, 1998), and C. Schlegel and L. Perez's "Trellis Coding" (IEEE Press, 1997).

10.1 Convolutional Codes and Implementation

The theory and realization of convolutional codes make extensive use of concepts in algebra, most specifically the theory of groups and finite fields. The very interested reader may want to read Appendix A. This section does not attempt rigor, but rather use of the basic concepts of convolutional codes. One need only know that binary modulo-2 arithmetic is closed under addition and multiplication, and that convolutional codes basically deal with vectors of binary sequences generated by a sequential encoder. Appendix B describes methods for translation of convolutional encoders.

10.1.1 Notation and Terminology

Let F be the finite (binary or modulo-2) field with elements {0, 1}.[1] A sequence of bits, a_m, in F can be represented by its D-transform

   a(D) = Σ_{m=−∞}^{∞} a_m D^m .                                   (10.1)

The sums in this section that deal with sequences in finite fields are presumed to be modulo-2 sums. In the case of the finite field, the variable D must necessarily be construed as a placeholder, and has no relation to the Fourier transform of the sequence (i.e., D ≠ e^{−jωT}). For this reason, a(D) uses a lower-case "a" in the transform notation, rather than the upper-case "A" that was conventionally used when dealing with sequences in the field of real numbers. Note that such lower-case nomenclature in D-transforms has been tacitly presumed in Chapters 8 and 9 also.

F[D] denotes the set of all polynomials (finite-length causal sequences) with coefficients in F,

   F[D] = { a(D) | a(D) = Σ_{m=0}^{ν} a_m D^m , a_m ∈ F , 0 ≤ ν < ∞ , ν ∈ Z } ;   (10.2)

where Z is the set of integers.[2] The set F[D] can also be described as the set of all polynomials in D with finite Hamming weight. Let F_r(D) be the field formed by taking ratios a(D)/b(D) of any two polynomials in F[D], with b(D) ≠ 0 and b_0 = 1:

   F_r(D) = { c(D) | c(D) = a(D)/b(D) , a(D) ∈ F[D] , b(D) ∈ F[D] , b(D) ≠ 0 , b_0 = 1 } .   (10.3)

This set F_r(D) is sometimes also called the rational fractions of polynomials in D.

Definition 10.1.1 (Delay of a Sequence) The delay, del(a), of a nonzero sequence, a(D), is the smallest time index, m, for which a_m is nonzero. The delay of the zero sequence a(D) = 0 is defined to be +∞. The delay is equivalent to the lowest power of D in a(D).

This chapter assumes the sequential encoder starts at some time, call it k = 0, and so all sequences of interest will have delay greater than or equal to 0. For example, the delay of the sequence 1 + D + D^2 is 0, while the delay of the sequence D^5 + D^10 is 5.

Definition 10.1.2 (Degree of a Sequence) The degree, deg(a), of a nonzero sequence, a(D), is the largest time index, m, for which a_m is nonzero. The degree of the zero sequence a(D) = 0 is defined to be −∞. The degree is equivalent to the highest power of D in a(D).

For example, the degree of the sequence 1 + D + D^2 is 2, while the degree of the sequence D^5 + D^10 is 10.

Definition 10.1.3 (Length of a Sequence) The length, len(a), of a nonzero sequence, a(D), is defined by

   len(a) = deg(a) − del(a) + 1 .                                  (10.4)

The length of the zero sequence a(D) = 0 is defined to be len(0) = 0.

For example, the length of the sequence 1 + D + D^2 is 3, while the length of the sequence D^5 + D^10 is 6.

[1] Finite fields are discussed briefly in Appendix A, along with the related concept of a ring, which is almost a field except that division is not well defined or implemented.
[2] The set F[D] can be shown to be a ring (see Appendix A).

10.1.2 The Convolutional Code

The general convolutional encoder is illustrated in Figure 10.1.

[Figure 10.1: General convolutional encoder block diagram. The input vector sequence u_m = [u_{k,m} ... u_{2,m} u_{1,m}] enters a sequential encoder, which produces the output vector sequence v_m = [v_{n,m} ... v_{2,m} v_{1,m}], followed by a modulator.]
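The delay, degree, and length measures of Definitions 10.1.1-10.1.3 are easy to compute mechanically. As an illustration (a sketch, not from the text), a binary sequence a(D) can be represented by the Python set of exponents m for which a_m = 1:

```python
# Sketch: represent a binary sequence a(D) over F = {0, 1} by the set of
# exponents m with a_m = 1, e.g. 1 + D + D^2 -> {0, 1, 2}.
import math

def delay(a):
    """del(a): smallest m with a_m nonzero; +inf for the zero sequence."""
    return min(a) if a else math.inf

def degree(a):
    """deg(a): largest m with a_m nonzero; -inf for the zero sequence."""
    return max(a) if a else -math.inf

def length(a):
    """len(a) = deg(a) - del(a) + 1; the zero sequence has length 0."""
    return (max(a) - min(a) + 1) if a else 0

# Examples from the text: 1 + D + D^2 and D^5 + D^10.
assert (delay({0, 1, 2}), degree({0, 1, 2}), length({0, 1, 2})) == (0, 2, 3)
assert (delay({5, 10}), degree({5, 10}), length({5, 10})) == (5, 10, 6)
```

Storing only the nonzero taps keeps all three measures one-liners; any other polynomial container would serve equally well.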
Coding theorists often use a time index of m instead of k, which is instead the number of input bits to a sequential encoder (b earlier in this text). This chapter then tries to maintain consistency with that significant literature on convolutional codes, in particular with k input bits at each time m producing n output bits. As will become evident, there is no need to call the input message m, because it will be represented by a vector of bits (so ambiguity of the use of m for a message index with the newly introduced interpretation of m as a time index will then not occur).

In convolutional codes, the k-dimensional binary vector sequence of input bits, u_m = [u_{k,m} u_{k−1,m} ... u_{1,m}], produces an n-dimensional vector sequence of output bits, v_m = [v_{n,m} v_{n−1,m} ... v_{1,m}]. The output vector may then be applied to a modulator that produces the N-dimensional channel input vector x_m. The study of convolutional codes focuses on the relationship between u_m and v_m. The relationship between v_m and x_m for the AWGN is simple binary translation to the bipolar (2-PAM) modulated waveform with amplitude ±√E_x. Note that the vector u(D) becomes a generalization of the message sequence m(D), while v(D) is a vector essentially suited to transmission on a BSC, and x(D) is suited to transmission on the AWGN.

The convolutional code is usually described by its generator G(D):

Definition 10.1.4 (Convolutional Code) The generator matrix, G(D), of a convolutional code can be any k × n matrix with entries in F_r(D) and rank k. The convolutional code, C(G), is the set of all n-dimensional vector sequences v(D) that can be formed by premultiplying G(D) by any k-dimensional vector sequence, u(D), whose component polynomials are causal and binary. That is, the code C(G) is the set of n-dimensional sequences, whose components are in F_r(D), that fall in the subspace spanned by the rows of G(D). Mathematically,

   C(G) = { v(D) | v(D) = u(D)·G(D) , u(D) ∈ F_r(D)^k } .          (10.5)

The code rate for a convolutional coder is defined as r = k/n = b.

Two convolutional encoders that correspond to generators G(D) and G′(D) are said to be equivalent if they generate the same code, that is, C(G) = C(G′).

Lemma 10.1.1 (Equivalence of Convolutional Codes) Two convolutional codes are equivalent if and only if there exists an invertible k × k matrix, A, of polynomials in F_r(D) such that G′(D) = A·G(D).

The rows of G(D) span a k-dimensional subspace of F_r(D)^n. Any (n−k) × n matrix that spans the orthogonal complement of the rows of G(D) is known as a parity matrix for the convolutional code, that is:

Definition 10.1.5 (Parity Matrix) An (n−k) × n matrix of rank (n−k), H(D), is known as the parity matrix for a code if, for any codeword v(D), v(D)·H*(D) = 0 (where * denotes transpose in this case).

An alternative definition of the convolutional code is then the set of n-dimensional sequences in F_r(D)^n described by

   C(G) = { v(D) | v(D)·H*(D) = 0 } .                              (10.6)

Note G(D)·H*(D) = 0. When (n−k) < k, the parity matrix is a more compact description of the code. H(D) also describes a convolutional code if it is used as a generator; this code is formally called the dual code to that formed by G(D). All codewords in the dual code are obviously orthogonal to those in the original code, and the two codes together span n-space.

The separation between codewords in a convolutional code is described by their Hamming distance:

Definition 10.1.6 (Hamming Distance) The Hamming distance, d_H(v(D), v′(D)), between two sequences, v(D) and v′(D), is the number of bit positions in which they differ. Similarly, the Hamming weight w_H(v(D)) is defined as the Hamming distance between the codeword and the zero sequence, w_H(v(D)) = d_H(v(D), 0). Equivalently, the Hamming weight is the number of ones in the codeword.

Definition 10.1.7 (Systematic Encoder) A systematic convolutional encoder has v_{n−i}(D) = u_{k−i}(D) for i = 0, ..., k−1. Equivalently, the systematic encoder has the property that all the inputs are directly passed to the output, with the remaining n−k output bits being reserved as parity bits.

One can always ensure that a coder with this property satisfies our above definition by properly labeling the output bits.

A common measure of the complexity of implementation of the convolutional code is the constraint length. In order to define the constraint length precisely, one can transform any generator G(D) whose entries are not in F[D] into an equivalent generator with entries in F[D] by multiplying every row by the least common multiple of the denominator polynomials in the original generator G(D), call it λ(D) (A = λ(D)·I_k in Lemma 10.1.1).

Definition 10.1.8 (Constraint Length) The constraint length, ν, of a convolutional encoder G(D) with entries in F[D] is log2 of the number of states in the encoder; equivalently, it is the number of D flip-flops (or delay elements) in the obvious realization (by means of an FIR filter). If ν_i (i = 1, ..., k) is the degree or constraint length of the ith row of G(D) (that is, the maximum degree of the n polynomials in the ith row of G(D)), then in the obvious realization of G(D)

   ν = Σ_{i=1}^{k} ν_i .                                           (10.7)

The complexity of a convolutional code measures the complexity of implementation over all possible equivalent encoders:

Definition 10.1.9 (Minimal Encoder) The complexity, μ, of a convolutional code, C, is the minimum constraint length over all equivalent encoders G(D) such that C = C(G). An encoder is said to be minimal if the complexity equals the constraint length, μ = ν.

A minimal encoder with feedback has a realization with the number of delay elements equal to the complexity.

[Figure 10.2: 4-state encoder example.]

The following is a (brute-force) algorithm to construct a minimal encoder G_min(D) for a given generator G(D). This algorithm was given by Piret in his recent text on convolutional codes:

1. Construct a list of all possible n-vectors whose entries are polynomials of degree ν or less, where ν is the constraint length of G(D). Sort this list in order of nondecreasing degree, so that the zeros vector is first on the list.

2. Delete from this list all n-vectors that are not codeword sequences in C(G). (To test whether a given row of n polynomials is a codeword sequence, it suffices to test whether all (k+1) × (k+1) determinants vanish when the k × n matrix G(D) is augmented by that row.)

3. Delete from this remaining list all codewords that are linear combinations of previous codewords.

The final list should have k codewords in it that can be taken as the rows of G_min(D), and the constraint length must be μ because the above procedure selected those codewords with the smallest degrees. The initial list of codewords should have 2^{n(ν+1)} codewords. A well-written program would combine steps 2 and 3 above and would stop on step 2 after generating k independent codeword sequences. This combined step would have to search somewhere between 2^{nμ} and 2^{n(μ+1)} codewords, because the minimum number of codewords searched would be 2^{nμ} if at least one codeword of degree μ exists, and the maximum would be 2^{n(μ+1)} − 1 if no codewords of degree μ + 1 exist.
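Definitions 10.1.4 and 10.1.5 can be exercised numerically. The sketch below (an illustration, not from the text) encodes u(D) = 1 + D^2 with the 4-state rate-1/2 generator G(D) = [1+D+D^2  1+D^2] of Figure 10.2, and verifies that the resulting codeword is annihilated by the parity matrix H(D) = [1+D^2  1+D+D^2]:

```python
# Sketch (not from the text): v(D) = u(D) G(D) over F_2, plus the parity check
# v(D) H(D)^T = 0, for the 4-state rate-1/2 generator of Figure 10.2.

def poly_mul(a, b):
    """Binary (mod-2) polynomial product; lists of taps, index = power of D."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def poly_add(a, b):
    """Mod-2 polynomial sum."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

g2, g1 = [1, 1, 1], [1, 0, 1]   # G(D) = [1+D+D^2, 1+D^2]
h2, h1 = [1, 0, 1], [1, 1, 1]   # H(D) = [1+D^2, 1+D+D^2]

u = [1, 0, 1]                   # u(D) = 1 + D^2
v2, v1 = poly_mul(u, g2), poly_mul(u, g1)

# v(D) H(D)^T = v2 h2 + v1 h1 must vanish (Definition 10.1.5).
syndrome = poly_add(poly_mul(v2, h2), poly_mul(v1, h1))
assert not any(syndrome)
```

Because G(D)·H*(D) = 0 here reduces to g2·h2 + g1·h1 = 0 over F_2, the syndrome vanishes for every input u(D), not just this one.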
A more efficient method for generating the minimal encoder involves the concept of basic encoders (see Appendix B).

10.1.3 Examples

EXAMPLE 10.1.1 (4-state optimal rate-1/2 code) The previous example of a convolutional encoder from Section 8.1 is repeated in Figure 10.2. There are k = 1 input bit and n = 2 output bits, so that the code rate is b = r = k/n = 1/2. The input/output relations are

   v_2(D) = (1 + D + D^2) u_1(D)                                   (10.8)
   v_1(D) = (1 + D^2) u_1(D)                                       (10.9)

Thus,

   G(D) = [ 1+D+D^2   1+D^2 ] .                                    (10.10)

The constraint length is ν = 2. The parity matrix is

   H(D) = [ 1+D^2   1+D+D^2 ] .                                    (10.11)

The code generated by

   G(D) = [ 1   (1+D^2)/(1+D+D^2) ] ,                              (10.12)

has a systematic encoder; this code also has the same parity matrix, meaning the two codes are the same, even though the mappings from input bits to the same set of codewords may be different.

[Figure 10.3: 8-state encoder example (r = 2/3), with inputs u_1(D), u_2(D), outputs v_1(D), v_2(D), v_3(D), and G(D) = [1 D 0; D^2 1 D].]

EXAMPLE 10.1.2 (8-state Ungerboeck code) Another example appears in Figure 10.3. This encoder has r = 2/3, and

   G(D) = [ 1    D   0
            D^2  1   D ] .                                         (10.13)

H(D) must satisfy G(D)·H*(D) = 0, or

   0 = h_3 + h_2 D                                                 (10.14)
   0 = h_3 D^2 + h_2 + D h_1                                       (10.15)

so that h_3 = D^2, h_2 = D, and h_1 = 1 + D^3 is a solution. Thus,

   H(D) = [ D^2   D   1+D^3 ] .                                    (10.16)

EXAMPLE 10.1.3 (8-state Ungerboeck code with feedback) A final example, illustrating the use of feedback in the encoder, appears in Figure 10.4. This encoder is rate r = 3/4

[Figure 10.4: 8-state encoder example with feedback (r = 3/4), with inputs u_1(D), u_2(D), u_3(D) and outputs v_1(D), v_2(D), v_3(D), v_4(D).]

and has

   G(D) = [ 1   0   0   0
            0   1   0   D^2/(1+D^3)
            0   0   1   D/(1+D^3) ] .                              (10.17)

For this 8-state feedback example, let us ignore the trivial pass-through bit u_3 = v_4. Then, G(D) becomes

   G(D) = [ 1   0   D^2/(1+D^3)
            0   1   D/(1+D^3) ] ,                                  (10.18)

for which

   0 = h_3 + h_1 · D^2/(1+D^3)                                     (10.19)
   0 = h_2 + h_1 · D/(1+D^3)                                       (10.20)

so that h_3 = D^2, h_2 = D, and h_1 = 1 + D^3 is a solution. Thus,

   H(D) = [ D^2   D   1+D^3 ] ,                                    (10.21)

and (ignoring u_3 = v_4) Example 10.1.2 and Figure 10.4 have the same code!

10.1.4 Trellis Diagrams for Convolutional Codes

Trellis diagrams can always be used to describe convolutional codes. The trellis diagram in Figure 10.5 is repeated from Section 8.1. The state in the diagram is represented by the ordered pair (u_{1,m−2}, u_{1,m−1}). The outputs are denoted in mod-4 arithmetic as the mod-4 value for the vector of bits v, with the most significant bit corresponding to the left-most bit, v_{n,m}. On the left, the mod-4 value written closer to the trellis corresponds to the lowest branch emanating from the state it labels, while the mod-4 value written furthest from the trellis corresponds to the highest branch emanating from the state it labels. The labeling is vice-versa on the right.

[Figure 10.5: Trellis for 4-state encoder example.]

The trellis diagram in Figure 10.6 is for the circuit in Figure 10.3. The state in the diagram is represented by the ordered triple (u_{1,m−2}, u_{2,m−1}, u_{1,m−1}). The outputs are denoted in octal arithmetic as the mod-8 value for the vector of bits v, with the most significant bit again corresponding to the left-most bit, v_{n,m}.

[Figure 10.6: Trellis for 8-state encoder example of Figure 10.3.]

10.1.5 Error Probabilities and Performance

Section 9.2 generally investigated the performance of codes designed with sequential encoders (or, equivalently, partial-response systems where channel ISI was viewed as absorbed into a sequential encoder). Convolutional codes are typically used on either an AWGN or a BSC. For the AWGN, as always, the performance is given by

   P_e ≈ N_e · Q( d_min/(2σ) ) = N_e · Q( √( d_free · SNR ) )      (10.22)
   P_b ≈ (N_b/b) · Q( √( d_free · SNR ) ) .                        (10.23)

For the BSC, the equivalent relations are

   P_e ≈ N_e · [ 4p(1−p) ]^⌈d_free/2⌉                              (10.24)
   P_b ≈ (N_b/b) · [ 4p(1−p) ]^⌈d_free/2⌉ .                        (10.25)

10.2 Convolutional Coding Tables and Decoder Complexity

This section describes implementation and complexity, and enumerates tables of convolutional codes.

10.2.1 Implementations

There are two basic implementations of convolutional codes that find most use in practice: feedback-free implementations and systematic implementations (possibly with feedback). This subsection illustrates how to convert a generator or a parity description of the convolutional code into these implementations. There are actually a large number of implementations of any particular code, with only the exact mapping of input bit sequences to the same set of output codewords differing between the implementations. When feedback is used, it is always possible to determine a systematic implementation with b = k of the bits directly passed to the output.
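As a quick numerical aside, the BSC approximations of Section 10.1.5 are straightforward to evaluate. The sketch below is an illustration, not from the text; the values d_free = 5, N_e = N_b = 1 are those tabulated later for the 4-state rate-1/2 code, and the ceiling exponent is one common reading of the d_free/2 exponent in (10.24)-(10.25):

```python
# Sketch (not from the text): BSC error-probability approximations
# P_e ~ N_e [4p(1-p)]^ceil(d_free/2),  P_b ~ (N_b/b) [4p(1-p)]^ceil(d_free/2).
import math

def bsc_pe(p, d_free, Ne):
    """Nearest-neighbor approximation of sequence-error probability on a BSC."""
    return Ne * (4 * p * (1 - p)) ** math.ceil(d_free / 2)

def bsc_pb(p, d_free, Nb, b):
    """Corresponding bit-error approximation."""
    return (Nb / b) * (4 * p * (1 - p)) ** math.ceil(d_free / 2)

# 4-state rate-1/2 code of Figure 10.2: d_free = 5, N_e = 1, N_b = 1, b = 1/2.
p = 1e-3
print(bsc_pe(p, 5, 1))        # (4p(1-p))^3, about 6.4e-8
print(bsc_pb(p, 5, 1, 0.5))   # twice the above, since N_b/b = 2
```

Note how the exponent, not the multiplier, dominates: doubling d_free roughly squares the error probability on a good channel.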
Systematic implementations are not always possible without feedback.

Generally speaking, the direct realization without feedback is often enumerated in tables of codes, as appear in Subsection 10.2.4. For instance, the rate-1/2 code G(D) = [1+D+D^2  1+D^2] is an example of the code with maximum minimum-distance for rate 1/2 with 4 states. The direct realization is obvious and was repeated in Section 10.1, Figure 10.2. To convert this encoder to a systematic realization, a new description of the input may be written as

   u′(D) = u(D)/(1 + D + D^2) .                                    (10.26)

Clearly u′(D) can take on all possible causal sequences, just as can u(D), and the two inputs are simply a relabeling of the relationship of input sequences to output code sequences. Thus, the codewords are generated by

   v′(D) = u′(D)·G(D) = u(D)·G(D)/(1+D+D^2) = u(D)·[ 1   (1+D^2)/(1+D+D^2) ] .   (10.27)

The new matrix on the right above is a new generator for the same code (the same set of codewords). This new generator or encoder is systematic, but also has feedback. Another way to generate this same encoder is to realize that, for any systematic encoder, one may write

   G_sys = [ I_k   h(D) ]                                          (10.28)

where h(D) defines the parity matrix through

   H_sys = [ h^T(D)   I_{n−k} ]                                    (10.29)

and the superscript T denotes transpose, to prevent confusion with prime here. Thus, for rate-(n−1)/n codes, the parity matrix can often be easily converted to this form by simply dividing all elements by the last. Then (10.28) defines the systematic realization. Thus for the 4-state example, h(D) = (1+D^2)/(1+D+D^2), and the circuit realization with two D memory elements and feedback appears in Figure 10.7.

[Figure 10.7: Systematic realization of 4-state convolutional code.]

More generally for a convolutional code, the systematic realization is found by finding the inverse of the first k columns of the matrix

   G(D) = [ G_{1:k}(D)   G_{k+1:n}(D) ]                            (10.30)

and premultiplying G(D) by G_{1:k}^{−1} to get

   G_sys = G_{1:k}^{−1}·G = [ I   G_{1:k}^{−1}·G_{k+1:n} ] ,       (10.31)

where the argument (D) has been dropped for notational simplification. The inverse is implemented with modulo-two arithmetic, and of course the first k columns must form an invertible matrix for this conversion to work. For the codes that this chapter lists in tables, such invertibility is guaranteed. Such encoders can be replaced by other encoders for the same code that are better according to procedures in Appendix B. However, this text does not emphasize poor codes, because most designers are well served to just use the codes in the tables. Premultiplication by the inverse does not change the codewords, because this amounts to replacing rows of the generator by linear combinations of the rows of the matrix in an invertible manner. Since the set of codewords is all possible combinations of the rows of G(D), this set is simply reindexed in terms of mappings to all possible input sequences by the matrix premultiplication.

As an example, the previous 8-state code in Example 10.1.2 has the generator

   G(D) = [ 1    D   0
            D^2  1   D ]                                           (10.32)

and premultiplication by

   G_{1:2}^{−1}(D) = (1/(1+D^3)) · [ 1    D
                                     D^2  1 ]                      (10.33)

leaves

   G_{1:2}^{−1}(D)·G(D) = [ 1   0   D^2/(1+D^3)
                            0   1   D/(1+D^3) ]                    (10.34)

and creates the systematic encoder.

Definition 10.2.1 (Catastrophic Encoder) A catastrophic encoder is one for which at least one codeword with finite Hamming weight corresponds to an input of infinite Hamming weight.

Since the set of all possible codewords is also the set of all possible error events, this is the same as saying that a finite number of decoding errors in a sequence could lead to an infinite number of input bit errors, clearly a catastrophic event. Catastrophic implementations are thus avoided. There are many tests for catastrophic codes, but one will suffice here:

Lemma 10.2.1 (Catastrophic Encoder Test) An encoder is non-catastrophic if and only if the greatest common divisor of the determinants of all the k × k submatrices of G(D) is a nonnegative power of D, that is, D^τ for some τ ≥ 0.

The proof is in Appendix B.

A non-catastrophic encoder always exists for any code, and all the encoders listed in the tables of this section are non-catastrophic. Trivially, a systematic encoder can never be catastrophic. It is also possible to find a minimal encoder for any code, which will be non-catastrophic, and then to convert that into a systematic encoder through the method above. That systematic encoder will also be minimal and thus non-catastrophic. For this reason, most convolutional codes are implemented with systematic encoders and minimal complexity. However, the tables of this section allow both the feedback-free and systematic realizations to be readily implemented.

10.2.2 Coding Gain for Convolutional Codes

In Volume I, and also again here in Volume II, coding gain has been defined as a comparison of distance-to-energy for two systems, a coded system and an uncoded system. This measure of performance is actually the most fair: from Chapter 1, one recalls that fair comparisons of two systems should evaluate the 3 parameters b, E_x, and P_e. Any two of the 3 can be fixed and the other compared. Convolutional coding gain essentially fixes b and E_x and looks at performance in terms of d_min, which is presumed directly related to P_e.

Unfortunately, in convolutional codes b is not easily fixed, and so the previous measure of coding gain is not then applicable. The designer thus must look more carefully and understand his criteria and comparison. There are two methods for comparison: bandwidth expansion and data-rate reduction.

Method One - Bandwidth Expansion

In this criterion, data rate R = b/T, power P = E_x/T, and symbol period T are held fixed while bandwidth is allowed to increase (with n) and performance compared. This comparison is thus somewhat unfair in that it presumes more dimensions can be allocated (i.e., that more bandwidth is available, when more than likely it is not available) via bandwidth expansion. When so doing, the designer then simply compares d_min values for convolutionally coded systems with 2-level PAM at the same data rate. The convolutional code system thus has 1/b more bandwidth than the uncoded system, and at fixed power and T, this means that E_x is reduced to b·E_x. Thus the coded squared minimum distance then scales as d_free·b·E_x. The ratio of coded to uncoded distance is then just γ = b·d_free, listed in decibels as

   γ (dB) = 10 log10( b · d_free ) .                               (10.35)

This quantity is then listed in tables as the coding gain for the convolutional code, even though it implies a bandwidth increase. The probability of codeword/sequence error is

   P_e ≈ N_e · Q( √( d_free · b · SNR_uncoded ) ) = N_e · Q( [ 2γ E_b/N_0 ]^{1/2} )   (10.36)
       = N_e · Q( √( d_free · SNR_coded ) ) .                      (10.37)

Method Two - Data-Rate Reduction

This criterion is perhaps more applicable in practice, but the argument that leads to the same measure of coding gain is a little harder to follow. The designer fixes power P and bandwidth to W (positive frequencies only). For the coded system, the fixed bandwidth then leads to a fixed symbol rate 1/T, or equivalently a data-rate reduction by a factor of b to R_code = b·2W with respect to uncoded 2-level PAM, which would have R_uncoded = 2W. The squared distance does increase to d_free·E_x for the coded system, but one could have used a lower-speed uncoded system with 1/b more energy per dimension for 2-level PAM transmission. Thus the ratio of squared-distance improvement is still b·d_free, the coding gain.

10.2.3 Binary Symmetric Channel Error Probability

For the binary symmetric channel, the simplest approximation to error probability is repeated here as

   P_e ≈ N_e · [ 4p(1−p) ]^⌈d_free/2⌉ .                            (10.38)

A slightly more complicated expression from Section 9.2 is

   P_e ≤ Σ_{d=d_free}^{∞} a(d) · [ √( 4p(1−p) ) ]^d ,              (10.39)

where a(d) is the number of error events of weight d. Expressions for bit error probability also occur in Section 9.2.

Probability of error with coding gain and energy-per-bit expression

From Section 10.1, the probability of symbol error is given by

   P_e ≈ N_e · Q( d_min/(2σ) ) = N_e · Q( √( d_free · SNR ) )      (10.40)

Explicit inclusion of coding gain is tacit because the SNR reduces as b. An alternate expression uses the energy per bit E_b = E_x/b, so that

   P_e ≈ N_e · Q( √( d_free · b · 2E_b/N_0 ) ) = N_e · Q( √( 2γ E_b/N_0 ) )   (10.41)

which includes the coding gain γ = b·d_free directly in the formula and views the quantity E_b/N_0 as fixed. Such a quantity is reasonably interpreted as fixed (ignoring bandwidth expansion) ONLY in the study of convolutional codes when b < 1.

10.2.4 Tables of Convolutional Codes

This subsection lists several of the most popular convolutional codes in tabular format. These tables are based on a very exhaustive set found by R. Johannesson and K. Zigangirov in their text "Fundamentals of Convolutional Coding" (IEEE Press, 1999), but in which we found many errors. Using Ginis' d_min calculation program, they have been corrected. The generators are specified in octal, with an entry of 155 for g11(D) in Table 10.1 for the 64-state code corresponding to g11(D) = D^6 + D^5 + D^3 + D^2 + 1.

   2^ν      g11(D)   g12(D)   d_free  γ     γ(dB)  Ne  N1  N2  Nb  L_D
   4        7        5        5       2.5   3.98   1   2   4   1   3
   8        17       13       6       3     4.77   1   3   5   2   5
   16       23       35       7       3.5   5.44   2   3   4   4   8   (GSM)
   16       31       33       7       3.5   5.44   2   4   6   4   7
   32       77       51       8       4     6.02   2   3   8   4   8
   64       163      135      10      5     6.99   12  0   53  46  16  (802.11a)
   64       155      117      10      5     6.99   11  0   38  36  16  (802.11b)
   64       133      175      9       4.5   6.53   1   6   11  3   9
   128      323      275      10      5     6.99   1   6   13  6   14
   256      457      755      12      6     7.78   10  9   30  40  18  (IS-95)
   256      657      435      12      6     7.78   11  0   50  33  16
   512      1337     1475     12      6     7.78   1   8   8   2   11
   1024     2457     2355     14      7     8.45   19  0   80  82  22
   2048     6133     5745     14      7     8.45   1   10  25  4   19
   4096     17663    11271    15      7.5   8.75   2   10  29  6   18
   8192     26651    36477    16      8     9.0    5   15  21  26  28
   16384    46253    77361    17      8.5   9.29   3   16  44  17  27
   32768    114727   176121   18      9     9.54   5   15  45  26  37
   65536    330747   207225   19      9.5   9.78   9   16  48  55  33
   131072   507517   654315   20      10    10     6   31  58  30  27

Table 10.1: Rate 1/2 Maximum Free Distance Codes

EXAMPLE 10.2.1 (Rate-1/2 example) The 8-state rate-1/2 code in the tables has a generator given by [17 13]. The generator polynomial is thus G(D) = [D^3+D^2+D+1  D^3+D+1]. Both feedback-free and systematic-with-feedback encoders appear in Figure 10.8.

   2^ν     g11(D)   g12(D)   g13(D)   d_free  γ     γ(dB)  Ne  N1  N2  Nb  L_D
   4       5        7        7        8       2.67  4.26   2   0   5   3   4
   8       15       13       17       10      3.33  5.23   3   0   2   6   6
   16      25       33       37       12      4     6.02   5   0   3   12  8
   32      71       65       57       13      4.33  6.37   1   3   6   1   6
   64      171      165      133      15      5     6.99   3   3   6   7   11
   128     365      353      227      16      5.33  7.27   1   5   2   2   10
   256     561      325      747      17      5.67  7.53   1   2   6   1   9
   512     1735     1063     1257     20      6.67  8.24   7   0   19  19  15
   1024    3645     2133     3347     21      7     8.45   4   1   4   12  16
   2048    6531     5615     7523     22      7.33  8.65   3   0   9   8   15
   4096    13471    15275    10637    24      8.00  9.03   2   8   10  6   17
   8192    32671    27643    22617    26      8.67  9.38   7   0   23  19  20
   16384   47371    51255    74263    27      9     9.54   6   4   6   22  24
   32768   151711   167263   134337   28      9.33  9.70   1   6   5   2   17
   65536   166255   321143   227277   30      10    10     1   9   20  4   22

Table 10.2: Rate 1/3 Maximum Free Distance Codes

   2^ν    g11(D)  g12(D)  g13(D)  g14(D)  d_free  γ     γ(dB)  Ne  N1  N2  Nb  L_D
   4      7       7       7       5       10      2.5   3.98   1   1   1   2   4
   8      17      15      13      13      13      3.25  5.12   2   1   0   4   6
   16     37      35      33      25      16      4     6.02   4   0   2   8   7
   32     73      65      57      47      18      4.5   6.53   3   0   5   6   8
   64     163     147     135     135     20      5     6.99   10  0   0   37  16
   128    367     323     275     271     22      5.5   7.40   1   4   3   2   9
   256    751     575     633     627     24      6.0   7.78   1   3   4   2   10
   512    0671    1755    1353    1047    26      6.5   8.13   3   0   4   6   12
   1024   3321    2365    3643    2277    28      7.0
8.45 4 0 5 9 162048 7221 7745 5223 6277 30 7.5 8.75 4 0 4 9 154096 15531 17435 05133 17627 32 8 9.03 4 3 6 13 178192 23551 25075 26713 37467 34 8.5 9.29 1 0 11 3 1816384 66371 50575 56533 51447 37 9.25 9.66 3 5 6 7 1932768 176151 123175 135233 156627 39 9.75 9.89 5 7 10 17 2165536 247631 264335 235433 311727 41 10.25 10.1 3 7 7 7 20Table10.3: Rate1/4MaximumFreeDistanceCodes1612h3(D) h2(D) h1(D) dfreef(dB) NeN1N28 17 15 13 4 2.667 4.26 1 5 2416 05 23 27 5 3.33 5.22 7 23 5932 53 51 65 6 4.00 6.02 9 19 8064 121 113 137 6 4.0 6.02 1 17 47128 271 257 265 7 4.67 6.69 6 26 105256 563 601 475 8 5.33 7.27 8 40 157512 1405 1631 1333 8 5.33 7.27 1 20 751024 1347 3641 3415 9 6 7.78 9 45 1662048 5575 7377 4033 10 6.67 8.24 29 0 4734096 14107 13125 11203 10 6.67 8.24 4 34 1278192 29535 31637 27773 11 7.33 8.65 9 72 22216384 66147 41265 57443 12 8 9.02 58 0 84732768 100033 167575 155377 12 8 9.02 25 0 46265536 353431 300007 267063 12 8 9.02 7 26 120Table10.4: Rate2/3MaximumFreeDistanceCodes2h4(D) h3(D) h2(D) h1(D) dfreef(dB) NeN1N232 45 71 63 51 5 3.75 5.74 9 47 21864 151 121 177 111 6 4.5 6.53 30 112 640Table10.5: Rate3/4MaximumFreeDistanceCodes!"#% ) (! "kx& & & ku1 ! ku2 ! kukv , 2kv , 12Tky% & & 3 ! ku% % ku&% && % & ) (! "!"# kxkv , 2kv , 1 2TkyFigure10.8: 8-Staterate-1/2equivalentencodersfromTables162! ku#! ## ! # ) (2 ! "$%& kxkv , 2kv , 132Tkykv , 3! ) (1 ! "Figure10.9: 16-Staterate-2/3equivalentencodersfromTablesEXAMPLE10.2.2(Rate2/3example) The16-staterate2/3codeinthetableshasaparity matrix given by [05 23 27]. The parity polynomial is thus H(D) = [D2+1 D4+D2+DD4+ D2+ D + 1]. Equivalently,Hsys(D) = [D2+1D4+D2+D+1D4+D2+DD4+D2+D+11]AsystematicgeneratoristhenG(D) =_1 0D2+1D4+D2+D+10 1D4+D2+DD4+D2+D+1_. (10.42)Thesystematic-with-feedbackencoderappearsinFigure10.9.10.2.5 ComplexityCode complexity is measured by the number of adds and compares that are executed per symbol periodbytheViterbiMLdecoder. ThisnumberisalwaysND= 2_2k+ 2k1_(10.43)forconvolutionalcodes. 
Thisisagoodrelativemeasureofcomplexityforcomparisons, butnotneces-sarilyanaccuratecountofinstructionsorgatesinanactualimplementation.10.2.6 ForcingaKnownEndingStateFor encoders without feedback, the input u(D) can be set to zero for symbol periods. Essentially, thisforcesaknownreturntotheall zerosstate. Theencoderoutputsforthesesymbol periodsaresentthroughthechannelandthedecoderknowstheycorrespondtozeros. Thus,asequencedecoderwouldthenknowtondthebestpathintoonlytheallzerosstateattheendofsomepacketoftransmission.If the packet length is much larger than , then the extra symbols corresponding to no inputs constituteasmallfractionofthechannelbandwidth. Insomesense, symbolsattheendofthepacketbeforetheintentionalzero-stungarethentreatedatleastequallytosymbolsearlierwithinthepacket.TailBitingThestateofarecursive(i.e.,havingfeedback)encodercanalsobeforcedtozerobywhatiscalledtailbyting.Tail byting is the nite-eld equivalent of providing an input to a circuit that zeros the poles.Figure10.10illustratesthebasicoperation. Aswitchisusedinthefeedbacksection. AfterapacketofL input symbols has been input to the encoder with the switch in the upper position,then the switch isloweredandthefeedbackelementsaresuccessivelyzeroed. Theoutputofthefeedbacksectionisinputto the lter (which because of the exclusive or operation zeros the input to the rst delay element). Thisdelay-lineoutputvalueisalsotransmittedasadditional inputsymbols. ThestateisagainforcedtozerosothatadecodercanexploitthisadditionalknowledgeoftheendingstateforthepacketofL +transmittedsymbols.163! ku#! ## ! # kv , 1kv , 2! ku , 1 Figure10.10: IllustrationofTailBiting.164! m 8lnary Lncoder C k blLs b-k blLs CoseL SelecL (CS) Slgnal SelecL (SS)
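A minimal sketch of the zero-tail termination just described, using the 4-state rate-1/2 feedforward code with octal generators [7, 5] from Table 10.1 (the helper names are ours): appending ν zero inputs returns the encoder to the all-zeros state.

```python
# Zero-tail termination of a feedforward convolutional encoder (Section 10.2.6).
# Generators are given in octal, as in the tables; helper names are ours.

def octal_to_taps(octal_str):
    """Expand an octal generator into a binary tap list (MSB = highest power of D)."""
    return [int(b) for b in bin(int(octal_str, 8))[2:]]

def encode_zero_tail(data, generators=("7", "5")):
    taps = [octal_to_taps(g) for g in generators]
    nu = len(taps[0]) - 1                  # encoder memory
    shift = [0] * (nu + 1)                 # current input plus nu past inputs
    out = []
    for bit in data + [0] * nu:            # nu trailing zeros terminate the trellis
        shift = [bit] + shift[:-1]
        for g in taps:
            out.append(sum(t * s for t, s in zip(g, shift)) % 2)
    return out, shift[:nu]                 # codeword and final encoder state

codeword, state = encode_zero_tail([1, 0, 1, 1, 0, 0, 1])
assert state == [0, 0]                     # known ending state
assert len(codeword) == 2 * (7 + 2)        # 2 output bits per input, plus the tail
```

For a packet of L inputs this costs only ν extra symbols, matching the bandwidth argument above.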
Figure 10.11: The coset-code encoder (binary encoder G, coset select (CS), and signal select (SS); the k + r_G encoder output bits, a sequence in G, select one of 2^(k+r_G) cosets of Λ′ in Λ/Λ′, and the remaining b − k bits select one of the 2^(b−k) points in the chosen coset).

10.3 Coset Codes, Lattices, and Partitions

The general coset-code encoder is shown in Figure 10.11. This encoder consists of 3 major components: the binary encoder (G), the coset select (CS), and the signal select (SS). The encoder output, x_m, is an N-dimensional vector sequence of points; m is a symbol-time index. Each (N-dimensional) symbol of this sequence is chosen from an N-dimensional constellation. The sequences of x_m are the codewords x(D) = Σ_m x_m D^m. This signal constellation consists of 2^(b+r_G) signal points in some coset of an N-dimensional real lattice Λ (for a definition of a lattice and its cosets, see Appendix C). The basic idea in designing a coset code is to select carefully N-dimensional sequences that are separated by a large minimum distance. The signal constellation contains 2^(k+r_G) subsets (cosets) of the constellation that each contain 2^(b−k) points. A good trellis code is designed by carefully selecting sequences of cosets that will lead to the desired increase in separation. The vector sequence x_m is converted by modulation to a continuous-time waveform according to the techniques discussed in Chapter 1.

At any time m, there are b input bits to the encoder, of which k are input to a conventional binary encoder (convolutional or block), which produces n = k + r_G output bits, with r_G specifying the number of redundant bits produced by the encoder (r_G = n − k in Sections 10.1 and 10.2). The quantity r̄_G = r_G/N is called the normalized redundancy of the convolutional encoder in the coset-code encoder. The quantity k̄_G = k/N is called the informativity of the convolutional encoder in the coset-code encoder. The k + r_G output bits of the binary encoder specify one of 2^(k+r_G) disjoint subsets (or cosets) into which the signal constellation has been divided. This subset selection is performed by the coset select. The current output signal point x_m at time m is selected by the signal select from the 2^(b−k) remaining points in the coset that is currently selected by the coset select. The set of all possible sequences that can be generated by the coset encoder is called the coset code and is denoted by C.

In Figure 10.11, the sublattice Λ′ is presumed to partition the lattice Λ, written Λ/Λ′, into 2^(k+r_G) cosets of Λ′, where each coset has all of its points contained within the original lattice Λ. Such a partitioning always accompanies the specification of a coset code.

Definition 10.3.1 (Coset Partitioning Λ/Λ′): A coset partitioning is a partition of the lattice Λ into |Λ/Λ′| (called the order of the partition) cosets of a sublattice Λ′ such that each point in the original lattice Λ is contained in one, and only one, coset of the sublattice Λ′.

The coset select in Figure 10.11 then accepts the k + r_G bits from G as input and outputs a corresponding index specifying one of the cosets of Λ′ in the coset partitioning Λ/Λ′. There are 2^(b+r_G) constellation points chosen from the lattice Λ, and each is in one of the 2^(k+r_G) cosets. There are an equal number, 2^(b−k), of constellation points in each coset. The signal select accepts b − k uncoded input bits and selects which of the 2^(b−k) points in the coset of Λ′ specified by the coset select will be transmitted as the modulated vector x_m.

If the encoder G is a convolutional encoder, then the set of all possible transmitted sequences x(D) is a Trellis Code, and if G is a block encoder, the set of N-dimensional vectors is a Lattice Code. So, both trellis codes and lattice codes are coset codes.

10.3.1 Gain of Coset Codes

There are several important concepts in evaluating the performance of a coset code, but the gain is initially the most important. The fundamental gain will be taken in Volume II to always be with respect to an uncoded system, x̄, that uses points on the N-dimensional integer lattice (denoted Z^N). It is easy (and often harmless, but not always) to forget the difference between V_x̄ and V(Λ): V_x̄ is the volume of the constellation and is equal to the number of points in the constellation times V(Λ_x̄). For coding-gain calculations where the two compared systems have the same number of points, this difference is inconsequential. However, in trellis codes, the two are different because the coded system often has extra redundant points in its constellation. For the uncoded reference Z^N lattice, V(Z^N) = 1 and d_min(Z^N) = 1, so that the fundamental gain of a coset code reduces to

    γ_f = [ d²_min(C) / V_C^(2/N) ] / [ d²_min(x̄) / V_x̄^(2/N) ]
        = [ d²_min(C) / (2^(2(b̄+r̄_G)) · V^(2/N)(Λ)) ] / [ d²_min(x̄) / 2^(2b̄) ]
        = d²_min(C) / ( V(Λ)^(2/N) · 2^(2 r̄_G) ).   (10.44)

The quantity r̄_G = r_G/N is the normalized redundancy of the encoder G.
The gain γ_f in dB is

    γ_f = 10 log10( d²_min(C) / (V(Λ)^(2/N) · 2^(2 r̄_G)) )   (10.45)
        = 20 log10( d_min(C) / (V(Λ)^(1/N) · 2^(r̄_G)) ).   (10.46)

The redundancy of the overall coset code requires the concept of the redundancy of the original lattice.

Definition 10.3.2 (Redundancy of a Lattice): The redundancy of a lattice Λ is defined by r_Λ such that

    V(Λ) = 2^(r_Λ) = 2^(N r̄_Λ),   (10.47)

or r_Λ = log2(V(Λ)) bits per symbol. The quantity r̄_Λ = r_Λ/N is called the normalized redundancy of the lattice, which is measured in bits/dimension.

Then, the fundamental coding gain of a coset code is

    γ_f = d²_min(C) / 2^(2(r̄_G + r̄_Λ)) = d²_min(C) / 2^(2 r̄_C),   (10.48)

where

    r̄_C = r̄_G + r̄_Λ.   (10.49)

Equation (10.48) is often used as a measure of performance in evaluating a given trellis code. Good coset codes typically have 3 dB ≤ γ_f ≤ 6 dB.

Shaping gain is also important in evaluating trellis codes, but is really a function of the shape of the constellation used rather than the spacing of sequences of lattice points. The shaping gain is defined as (again comparing against a zero-mean translate of the Z^N lattice)

    γ_s = V^(2/N)(Λ) · 2^(2 r̄_G) · (2^(2b̄) − 1) / (12 Ē_c(C)) = 2^(2 r̄_C) · (2^(2b̄) − 1) / (12 Ē_c(C)),   (10.50)

where Ē_c(C) is the energy per dimension required for the coset code. Using the so-called continuous approximation (which holds accurately for b̄ ≥ 3), the shaping gain is often approximated by

    γ_s ≈ 2^(2 r̄_C) · 2^(2b̄) / (12 Ē_c(C)).   (10.51)

This can also be written in dB as

    γ_s ≈ 10 log10( V^(2/N)(Λ) · 2^(2(b̄ + r̄_G)) / (12 Ē_c(C)) )   (10.52)
        = 10 log10( 2^(2(b̄ + r̄_C)) / (12 Ē_c(C)) ).   (10.53)

This shaping gain has a theoretical maximum of 1.53 dB; the best known shaping methods achieve about 1.1 dB (see Section 10.8).

EXAMPLE 10.3.1 (Ungerboeck's Rate 1/2 3 dB Trellis Code): In Figure 10.12, the 8AMPM (or 8CR) constellation is a subset of a (possibly scaled and translated) version of what is known as the Λ = D2 lattice, and contains |Λ| = 8 points. The average energy per symbol for this system is Ē = 10, or Ē_c = 5 per dimension. The (translated) sublattice Λ′ has a coset Λ0 that contains |Λ0| = 2 points, so that there are |Λ/Λ′| = 4 cosets of Λ′ in Λ; they are Λ0 = {0, 4}, Λ1 = {1, 5}, Λ2 = {2, 6}, and Λ3 = {3, 7}. These 4 cosets are selected by the two-bit output v(D) of a rate 1/2 convolutional encoder with generator

    G(D) = [ 1 + D^2   D ].   (10.54)

Figure 10.12: Partitioning (a coset of) the D2 Lattice.

The corresponding trellis and trellis encoder are shown in Figures 10.13 and 10.14, respectively.

Figure 10.13: Trellis for the 4-state rate 1/2 Ungerboeck Code.

Figure 10.14: Encoder for the 4-state rate 1/2 Ungerboeck Code.

The convolutional encoder, G, has rate r = 1/2, but the overall coset code has b̄ = 2 bits / 2 dimensions = 1 bit/dimension. The input bit u_{1,k} passes through G, where the two output bits select the index of the desired coset Λ0, Λ1, Λ2, or Λ3. The minimum distance between the points in any of these cosets is d_min(Λ′) = 2 d_min(Λ) = 4√2. This distance is also the distance between the parallel transitions that are tacit in Figure 10.13. Figure 10.13 inherently illustrates that any two (non-parallel) paths that start and terminate in the same pair of states must have a distance of at least d_t = sqrt(16 + 8 + 16) = sqrt(40), which is always greater than 4√2, so that the parallel-transition distance is the minimum distance for this code. This parallel-transition distance (normalized to square-root energy and ignoring constellation-shape effects) is √2 better than the distance corresponding to no extra bit, or just transmitting (uncoded) 4QAM. More precisely, the coding gain is

    γ = ( d²_min / Ē_x )_coded / ( d²_min / Ē_x )_uncoded.   (10.55)

This expression evaluates to (for comparison against 4QAM, which is also Z^N and exact for γ_f and γ_s calculations)

    γ = (32/10) / (4/2) = 1.6  (2 dB).   (10.56)

The fundamental coding gain is (realizing that r̄_C = r̄_Λ + r̄_G = 1.5 + .5 = 2)

    γ_f = d²_min / 2^(2 r̄_C) = 32 / 2^4 = 2  (3 dB).   (10.57)

Thus, the shaping gain is then γ_s = γ − γ_f = 2 − 3 = −1 dB, which verifies according to

    γ_s = 2^(2·2) (2^2 − 1) / (12 · 5) = 4/5  (−1 dB).   (10.58)

This coset code of Figure 10.14 can be extended easily to the case where b = 3, by using the constellation in Figure 10.15. In this case, r̄_C = .5 = r̄_Λ + r̄_G = 0 + .5, b̄ = 1.5, and Ē_c = 1.25. This constellation contains 16 points from the scaled and translated Z^2 lattice. The sublattice and its 4 cosets are illustrated by the labels 0, 1, 2, and 3 in Figure 10.15. This coset code uses a circuit almost identical to that in Figure 10.14, except that two bits enter the signal select to specify which of the 4 points in the selected coset will be the modulated signal. The minimum distance of the coded signal is still twice the distance between points in Λ. The fundamental coding gain is

    γ_f = 4 / 2^(2·.5) = 2  (3 dB).   (10.59)

The fundamental gain γ_f will remain constant at 3 dB if b further increases in the same manner; however, the shaping gains will vary slightly. The shaping gain in this case is

    γ_s = 2^(2·.5) (2^3 − 1) / (12 · (5/4)) = 14/15  (−.3 dB).
    (10.60)

The coding gain of this code with respect to 8CR is (16/5)/(8/5) = 2 (3 dB). This code class is known as Ungerboeck's 4-state Rate 1/2 Trellis Code, and is discussed further in Section 10.4.

10.3.2 Mapping By Set Partitioning

The example of the last section suggests that the basic partitioning of the signal constellation Λ/Λ′ can be extended to larger values of b. The general determination of Λ′, given Λ, is called mapping-by-set-partitioning. Mapping-by-set-partitioning is instrumental to the development and understanding of coset codes.

Figure 10.15: Partitioning 16QAM (d = 1).

Partitioning of the Integer Lattice: Example 10.3.1 partitioned the 16QAM constellation used to transmit 3 bits of information per symbol. The basic idea is to divide or to partition the original constellation into two equal-size parts, which both have 3 dB more distance between points than the original lattice. Figure 10.16 repeats the partitioning of 16QAM on each of the partitions to generate 4 sublattices with distance 6 dB better than the original lattice. The cosets are labeled according to an Ungerboeck Labeling:

Definition 10.3.3 (Ungerboeck Labeling in two dimensions): An Ungerboeck labeling for a Coset Code C uses the LSB, v0, of the encoder G output to specify which of the first 2 partitions (B0, v0 = 0, or B1, v0 = 1) contains the selected coset of the sublattice Λ′, then uses v1 to specify which of the next-level partitions (C0, C2, C1, C3) contains the selected coset of the sublattice, and finally, when necessary, v2 is used to select which of the 3rd-level partitions is the selected coset of the sublattice. This last level contains 8 cosets: D0, D4, D2, D6, D1, D5, D3, and D7.

The remaining bits, v_{k+r}, ..., v_{b+r−1}, are used to select the point within the selected coset. The partitioning process can be continued for larger constellations, but is not of practical use for two-dimensional codes. In practice, mapping by set partitioning is often used for N = 1, N = 2, N = 4, and N = 8. One-dimensional partitioning halves a PAM constellation into sets of every other point, realizing a 6 dB increase in intra-partition distance for each such halving. In 4 and 8 dimensions, which will be considered later, the distance increase is (on average) 1.5 dB per partition and .75 dB per partition, respectively.

Figure 10.16: Illustration of Mapping by Set Partitioning for the 16QAM Constellation.

Figure 10.17: Partition tree.

Partition Trees and Towers: Forney's partition trees and towers are alternative, more mathematical descriptions of mapping by set partitioning and the Ungerboeck labeling process. The partition tree is shown in Figure 10.17. Each bit of the convolutional encoder (G) output is used to delineate one of two cosets of the parent lattice at each stage in the tree. The constant vector that is added to one of the partitions to get the other is called a coset leader, and is mathematically denoted as g_i. The specification of a vector point that represents any coset in the ith stage of the tree is

    x = λ′ + Σ_{i=0 .. k+r_G−1} v_i g_i = λ′ + v [ g_{k+r_G−1} ; ... ; g_1 ; g_0 ] = λ′ + v G,   (10.61)

where G is a generator for the coset code whose rows are the coset leaders. Usually in practice, a constant offset a is added to all x so that 0 (a member of all lattices) is not in the constellation, and the constellation has zero mean. The final b − k bits will specify an offset from x that will generate our final modulation symbol vector. The set of all possible binary combinations of the vectors g_i is often denoted

    [Λ/Λ′] = {v G}.   (10.62)

Thus,

    Λ = Λ′ + [Λ/Λ′],   (10.63)

symbolically, to abbreviate that every point in Λ can be (uniquely) decomposed as the sum of a point in Λ′ and one of the coset leaders in the set [Λ/Λ′], or recursively

    Λ^(i) = Λ^(i+1) + [Λ^(i)/Λ^(i+1)].   (10.64)

There is thus a chain rule

    Λ^(0) = Λ^(2) + [Λ^(1)/Λ^(2)] + [Λ^(0)/Λ^(1)]   (10.65)

or

    [Λ^(0)/Λ^(2)] = [Λ^(0)/Λ^(1)] + [Λ^(1)/Λ^(2)],   (10.66)

where [Λ^(i)/Λ^(i+1)] = {0, g_i} and [Λ^(0)/Λ^(2)] = {0, g0, g1, g0 + g1}. This concept is probably most compactly described by the partition tower of Forney, which is illustrated in Figure 10.18. The partition chain is also compactly denoted by

    Λ^(0) / Λ^(1) / Λ^(2) / Λ^(3) ...   (10.67)

Figure 10.18: Partition tower.

EXAMPLE 10.3.2 (Two-dimensional integer lattice partitioning): The initial lattice Λ^(0) is Z^2. Partitioning selects every other point from this lattice to form the lattice D2, which has only those points in Z^2 that have even squared norms (see Figures 10.16 and 10.12). Mathematically, this partitioning can be written by using the rotation matrix

    R2 = [ 1  −1
           1   1 ]   (10.68)

as

    D2 = R2 Z^2,   (10.69)

where the multiplication in (10.69) symbolically denotes taking each point in Z^2 and multiplying by the rotation matrix R2 to create a new set of points that will also be a lattice. D2 is also a sublattice of Z^2 and therefore partitions Z^2 into two subsets. Thus, in partitioning notation:

    Z^2 / D2.   (10.70)

The row vector that can be added to D2 to get the other coset is g0 = [0 1]. So, Z^2/D2 with coset leaders [Z^2/D2] = {[0 0], [0 1]}. D2 decomposes into two sublattices by again multiplying by the rotation operator:

    2Z^2 = R2 D2.   (10.71)

Points in 2Z^2 have squared norms that are multiples of 4. Then, D2/2Z^2 with [D2/2Z^2] = {[0 0], [1 1]}. Thus, [Z^2/2Z^2] = {[0 0], [0 1], [1 0], [1 1]}. The generator for a coset code with convolutional codewords [v1(D) v0(D)] as input is then

    G = [ 1  1
          0  1 ].   (10.72)

Partitioning continues by successive multiplication by R2: Z^2 / D2 / 2Z^2 / 2D2 / 4Z^2 ...
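Example 10.3.2 can be checked numerically on a patch of the lattice. The sign pattern chosen for R2 below is one common convention (the extraction leaves the matrix signs ambiguous), and the verification is only over a finite patch:

```python
# Partitioning Z^2 by the rotation operator R2: D2 = R2 Z^2 consists of the
# integer points with even squared norm, and R2 D2 = 2Z^2.

R2 = [[1, -1], [1, 1]]   # assumed sign convention for the rotation operator

def apply(R, p):
    return (R[0][0]*p[0] + R[0][1]*p[1], R[1][0]*p[0] + R[1][1]*p[1])

grid = [(i, j) for i in range(-4, 5) for j in range(-4, 5)]
D2_image = {apply(R2, p) for p in grid}
for x, y in D2_image:
    assert (x*x + y*y) % 2 == 0              # every R2 Z^2 point has even squared norm
for x in range(-3, 4):                       # every even-norm point in the patch is hit
    for y in range(-3, 4):
        if (x*x + y*y) % 2 == 0:
            assert (x, y) in D2_image
two_step = {apply(R2, apply(R2, p)) for p in grid}
assert all(x % 2 == 0 and y % 2 == 0 for x, y in two_step)   # R2 R2 Z^2 lies in 2Z^2
```

The same check extends down the chain Z^2 / D2 / 2Z^2 / 2D2 / 4Z^2 by repeated application of R2.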
Figure 10.19: Ungerboeck's 4-state rate 1/2 Trellis Code.

10.4 One- and Two-dimensional Trellis Codes

Ungerboeck's two most famous codes are in wide use and are used for illustrative purposes in this section.

10.4.1 Rate 1/2 Code

The earlier 4-state rate 1/2 code discussed in Section 10.3 can be generalized as shown in Figure 10.19. The encoder matrix G = [1 + D^2   D] can be equivalently written as a systematic encoder

    G_sys = [ 1   D/(1 + D^2) ],   (10.73)

which is shown in Figure 10.19. The parity matrix H appears in Figure 10.19 because most trellis codes are more compactly expressed in terms of H. In this code, r_G = 1, and 2^(b+1) points are taken from the lattice coset Z^2 + [1/2 1/2] to form the constellation, while the sublattice that is used to partition Λ is Λ′ = 2Z^2, upon which there are 2^(b−1) constellation points per coset, and there are 4 cosets of Λ′ in Λ, |Λ/Λ′| = 4. The fundamental gain remains at

    γ_f = [ d²_min(2Z^2) / (V(Z^2) · 2) ] / [ d²_min(Z^2) / V(Z^2) ] = (4/2) / (1/1) = 2  (3 dB).   (10.74)

Shaping gain is again a function of the constellation shape, and is not considered to be part of the fundamental gain of any code. For any number of input bits, b, the structure of the code remains mainly the same, with only the number of points within the cosets of Λ′ = 2Z^2 increasing with 2^b. The partition chain for this particular code is often written Z^2 / D2 / 2Z^2.

10.4.2 A simple rate 2/3 Trellis Code

In the rate 2/3 code of interest, the encoder G will be a rate 2/3 encoder. The objective is to increase fundamental gain beyond 3 dB, which was the parallel-transition distance in the rate 1/2 code of Section 10.3. Higher gain necessitates constellation partitioning by one additional level/step to ensure that the parallel-transition distance will now be 6 dB, as in Figure 10.20. Then, the minimum distance will usually
Figure 10.20: D-level partitioning of 16QAM.

occur between two longer-length sequences through the trellis, instead of between parallel transitions. The mapping-by-set-partitioning principle of Section 10.3.2 extends one more level to the chain Z^2 / D2 / 2Z^2 / 2D2, which is illustrated in detail in Figure 10.20 for a 16SQ QAM constellation. Figure 10.21 shows a trellis for the successive coset selection from D-level sets in Figure ?? and also illustrates an example of the worst-case path for computation of d_min. The worst-case path has distance d_min = sqrt(2 + 1 + 2) = sqrt(5).

Thus, signal expansion is over many dimensions, and thus there is a smaller constellation expansion over any particular dimension. However, as determined in Section 10.5.1, more levels of partitioning will be necessary to increase distance on the parallel transitions defined by Λ′ and its cosets to an attractive level. The lower signal expansion per dimension is probably the most attractive practical feature of multidimensional codes. Constellation expansion makes the coded signals more susceptible to channel nonlinearity and carrier jitter in QAM. Constellation expansion can also render decision-directed (on a symbol-by-symbol basis) timing and carrier loops, as well as the decision-feedback equalizer, sub-desirable in their performance, due to increased symbol-by-symbol error rates.

Multidimensional codes can be attractive because the computation required to implement the Viterbi detector is distributed over a longer time interval, usually resulting in a slight computational reduction (only slight, as we shall see that parallel-transition resolving in the computation of the branch metrics becomes more difficult). One particularly troublesome feature of multidimensional trellis codes is, however, a propensity towards high nearest-neighbor counts, with a resultant significant decrease from γ_f to γ̄_f.

Subsection 10.5.2 introduces a few simple examples of multidimensional trellis codes, lists the most popular 4-dimensional codes in tabular form (similar to the tables in Section 10.4.3), and lists the most popular 8-dimensional codes.

    A0(8):
        d_min w.r.t. A0(8):  1
        N_e   w.r.t. A0(8):  16
    B0(8)  B1(8):
        d_min w.r.t. B0(8):  √2   1
        N_e   w.r.t. B0(8):  112  16
    B0(8)  B2(8)  B1(8)  B3(8):
        d_min w.r.t. B0(8):  √2  √2  1   1
        N_e   w.r.t. B0(8):  48  64  8   8
    B0(8)  B4(8)  B2(8)  B6(8)  B1(8)  B5(8)  B3(8)  B7(8):
        d_min w.r.t. B0(8):  √2  √2  √2  √2  1   1   1   1
        N_e   w.r.t. B0(8):  16  32  32  32  4   4   4   4
    C0(8)  C4(8)  C2(8)  C6(8)  C1(8)  C5(8)  C3(8)  C7(8):
        d_min w.r.t. C0(8):  √2  √2  √2  √2  1   1   1   1
        N_e   w.r.t. C0(8):  240 16  16  16  2   2   2   2
    C8(8)  CC(8)  CA(8)  CE(8)  C9(8)  CD(8)  CB(8)  CF(8):
        d_min w.r.t. C0(8):  √2  √2  √2  √2  1   1   1   1
        N_e   w.r.t. C0(8):  16  16  16  16  2   2   2   2

Table 10.15: Eight-Dimensional Partitioning
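The N_e = 240 entry at the C0(8) level is the kissing number of the Gosset lattice E8, which can be confirmed by enumeration (the identification of C0(8) with a scaled E8 is our reading of the subset labels):

```python
from itertools import product

# E8 = {x in Z^8 or (Z + 1/2)^8 : sum of coordinates even}. Its 240 minimum-norm
# vectors (squared norm 2) split into 112 integer vectors and 128 half-integer ones.

def e8_min_vectors():
    count = 0
    for p in product((-1, 0, 1), repeat=8):          # integer part: (+-1, +-1, 0^6)
        if sum(p) % 2 == 0 and sum(x * x for x in p) == 2:
            count += 1
    for signs in product((-0.5, 0.5), repeat=8):     # half-integer part: (+-1/2)^8
        if int(sum(signs)) % 2 == 0:                 # even coordinate sum
            count += 1
    return count

assert e8_min_vectors() == 240
```

The split 240 = 112 + 128 also matches the 112 figure at the B0(8) (D8) level above.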
Figure 10.31: The coset-code encoder.
Figure 10.32: Trellis for the 2-state, rate 1/2, 4D code.

Figure 10.33: Trellis for the 2-state, rate 1/2, 4D code, based on D4.

Multidimensional Trellis Code Examples

EXAMPLE 10.5.1 (2-state, rate 1/2, 4D Code): The code uses Λ = Z^4 and Λ′ = R4 Z^4. The two-state trellis is shown in Figure 10.32. The labels for the various subsets are as indicated in the tables of Section 10.5.1. The minimum distance is given by d²_min = 2, and r̄_C = 0 + 1/4. Thus, the fundamental gain is

    γ_f = 2 / 2^(2·(1/4)) = √2  (1.5 dB).   (10.237)

This gain is no better than the D4 gain, which required no states. However, N_e(D4) = 6, whereas for this code N_e = 2, so the effective gain for the D4 lattice code is about 1.2 dB, while it is a full 1.5 dB for this 2-state trellis code.

EXAMPLE 10.5.2 (2-state, rate 1/2, 4D Code, based on D4): The code uses Λ = D4 and Λ′ = R4 D4. The two-state trellis is shown in Figure 10.33. The labels for the various subsets are as indicated in the tables in Section 10.5.1. The minimum distance is given by d²_min = 4, and r̄_C = 1/4 + 1/4.[5] Thus, the fundamental gain is

    γ_f = 4 / 2^(2·(1/2)) = 2  (3.01 dB).   (10.238)

This gain is better than the D4 gain, which required no states. However, N_e = 22 (= [24 + 8·8]/4), so the effective gain is about 2.31 dB for this code.

EXAMPLE 10.5.3 (Wei's 8-state, rate 2/3, 4D Code): The code uses Λ = Z^4 and Λ′ = R4 D4. The 8-state trellis is shown in Figure 10.34. The labels for the various subsets are as indicated in the tables in Section 10.5.1. The minimum distance is given by d²_min = 4, and r̄_C = 0 + 1/4. Thus, the fundamental gain is

    γ_f = 4 / 2^(2·(1/4)) = 2^1.5  (4.52 dB).   (10.239)

[5] Note that r̄_Λ = 1/4 for Λ = D4.
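The gap between γ_f and the quoted effective gains in these examples tracks the common rule of thumb that each doubling of the error coefficient N_e beyond a reference count of 2 costs roughly 0.2 dB; the function below is our sketch of that rule, not a formula stated in the text:

```python
import math

# Effective-gain rule of thumb (assumed convention): each doubling of Ne beyond
# a reference of 2 nearest neighbors costs about 0.2 dB of the fundamental gain.

def effective_gain_dB(gamma_f_dB, Ne, Ne_ref=2):
    return gamma_f_dB - 0.2 * math.log2(Ne / Ne_ref)

# Example 10.5.2: gamma_f = 3.01 dB with Ne = 22 -> about 2.31 dB
assert abs(effective_gain_dB(3.01, 22) - 2.31) < 0.02
# Example 10.5.1 (D4 lattice): gamma_f = 1.5 dB with Ne = 6 -> about 1.2 dB
assert abs(effective_gain_dB(1.5, 6) - 1.2) < 0.02
```

The same rule reproduces the effective gains quoted for the 8D examples and in Tables 10.16 and 10.17 to within a few hundredths of a dB.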
Figure 10.34: Trellis for Wei's 8-state, rate 2/3, 4D Code (same as the 8-state two-dimensional Ungerboeck trellis, except that a branch index of i stands for coset C4,i).

Figure 10.35: Trellis for Wei's 16-state, rate 2/3, 4D code (branch index i stands for coset C4,i).

This gain is yet better. However, N_e = 22 (= [24 + 8·8]/4), so the effective gain is about 3.83 dB for this code.

EXAMPLE 10.5.4 (Wei's 16-state, rate 2/3, 4D Code): The code uses Λ = Z^4 and Λ′ = R4 D4. The 16-state trellis is shown in Figure 10.35. The minimum distance is given by d²_min = 4, and r̄_C = 1/4 + 0. Thus, the fundamental gain is

    γ_f = 4 / 2^(2·(1/4)) = 2^1.5  (4.52 dB).   (10.240)

This gain is the same as for 8 states. However, N_e = 6, so the effective gain is about 4.2 dB for this code, which is better than the 8-state code. This code is the most commonly found code in systems that do use a 4-dimensional trellis code, and has been standardized as one option in the CCITT V.fast code for 28.8 kbps (uncompressed) voiceband modems (with minor modification, see Section 10.7) and also for Asymmetric Digital Subscriber Line (ADSL) transceivers. There is a 32-state 4-dimensional Wei code with N_e = 2, so that its effective gain is the full 4.52 dB.

EXAMPLE 10.5.5 (4-state, rate 2/4, 8D Code): The code uses Λ = E8 and Λ′ = R8 E8. The four-state trellis is shown in Figure 10.36. The minimum distance is given by
Figure 10.36: Trellis for the 4-state, rate 2/4, 8D Code.

d²_min = 8, and r̄_C = 4/8 + 2/8 = .75.[6] Thus, the fundamental gain is

    γ_f = 8 / 2^(2·(3/4)) = 2^1.5  (4.52 dB).   (10.241)

This gain is better than for any other 4-state code presented so far. However, N_e = 126 (= [240 + 3(16·16)]/8), so the effective gain is about 3.32 dB for this code, which is still better than the 4-state 2-dimensional 3 dB code with effective gain 3.01 dB.

EXAMPLE 10.5.6 (Wei's 16-state, rate 3/4, 8D Code): The code uses Λ = Z^8 and Λ′ = E8. The 16-state trellis is shown in Figure 10.37. The labels for the various subsets are as indicated in the tables in Section 10.5.1. The minimum distance is given by d²_min = 4, and r̄_C = 1/8 + 0 = .125. Thus, the fundamental gain is

    γ_f = 4 / 2^(2·(1/8)) = 2^1.75  (5.27 dB).   (10.242)

This gain is better than for any other 16-state code studied. However, N_e = 158, so the effective gain is about 3.96 dB. There is a 64-state Wei code that also has γ_f = 5.27 dB, but with N_e = 30, so that γ̄_f = 4.49 dB. This 64-state code (actually a differentially phase-invariant modification of it) is used in some commercial (private-line) 19.2 kbps voiceband modems.

4D Code Table: Table 10.16 is similar to Tables 10.6 and 10.8, except that it is for 4-dimensional codes. Table 10.16 lists up to rate 4/5 codes. The rate can be inferred from the number of nonzero terms h_i in the table. The design specifies Λ and Λ′ in the 4D case, which also appear in Table 10.16. N_1 and N_2 are not shown for these codes. N_1 and N_2 can be safely ignored because an increase in d_min in the tables by 1 or 2 would be very significant, and the numbers of nearest neighbors would have to be very large on paths with d > d_min in order for their performance to dominate the union bound for P_e.

8D Code Table: Table 10.17 is similar to Tables 10.6 and 10.8, except that it is for 8-dimensional codes. Table 10.17 lists up to rate 4/5 codes. The rate can again be inferred from the number of nonzero terms h_i in the table.

[6] Note that r̄_Λ = 4/8 for Λ = E8.
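The D4 neighbor count used in Example 10.5.1 can be checked by direct enumeration; the per-dimension normalization N_e(D4) = 24/4 = 6 is our reading of the text's convention:

```python
from itertools import product

# D4 is the set of integer 4-tuples with even coordinate sum: minimum squared
# norm 2, kissing number 24 (the 24 permutations and sign patterns of (1,1,0,0)).

d4 = [p for p in product(range(-2, 3), repeat=4) if sum(p) % 2 == 0]
norms = sorted({sum(x * x for x in p) for p in d4 if any(p)})
assert norms[0] == 2                                     # d^2_min(D4) = 2
kissing = sum(1 for p in d4 if sum(x * x for x in p) == 2)
assert kissing == 24
assert kissing // 4 == 6                                 # Ne(D4) per dimension
```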
Figure 10.37: Trellis for Wei's 16-state, rate 3/4, 8D code.

Λ    Λ′     2^ν   h4    h3    h2    h1    h0    d²_min  γ_f      (dB)  N_e  γ̄_f   N_D
Z4   R4D4   8     -     -     02    04    11    4       2^(3/2)  4.52  22   3.82  22
D4   2D4    16    -     10    04    02    21    6       3        4.77  88   3.88  76
Z4   R4D4   16    -     -     14    02    21    4       2^(3/2)  4.52  6    4.20  36
Z4   2Z4    32    -     30    14    02    41    4       2^(3/2)  4.52  2    4.52  122
D4   2D4    64    -     050   014   002   121   6       3        4.77  8    4.37  256
Z4   2D4    64    050   030   014   002   101   5       5/√2     5.48  36   4.65  524
Z4   2D4    128   120   050   022   006   203   6       6/√2     6.28  364  4.77  1020

Table 10.16: Four-Dimensional Trellis Codes and Parameters

Λ    Λ′     2^ν   h4    h3    h2    h1    h0    d²_min  γ_f      (dB)  N_e  γ̄_f   N_D
E8   R8E8   8     -     10    04    02    01    8       2^(7/4)  5.27  382  3.75  45
E8   R8E8   16    -     10    04    02    21    8       2^(7/4)  5.27  158  4.01  60
Z8   E8     16    -     10    04    02    21    4       2^(7/4)  5.27  158  4.01  52
E8   2E8    16    (r = 4/8)  -?-   -?-   -?-    16      4        6.02  510  4.42  342
E8   R8E8   32    -     30    14    02    61    8       2^(7/4)  5.27  62   4.28  90
Z8   E8     32    -     10    04    02    41    4       2^(7/4)  5.27  62   4.28  82
E8   R8E8   64    -     050   014   002   121   8       2^(7/4)  5.27  30   4.49  150
Z8   E8     64    -     050   014   002   121   4       2^(7/4)  5.27  30   4.49  142
Z8   R8D8   128   120   044   014   002   101   8       2^(7/4)  5.27  14   4.71  516
E8   2E8    256   (r = 4/8)  -?-   -?-   -?-    16      4        6.02  30   5.24  1272

Table 10.17: Eight-Dimensional Trellis Codes and Parameters

Table 10.17 specifies Λ and Λ′ in the 8D case. N_1 and N_2 are also not shown for these codes, although the author suspects that N_1 could dominate for the Z8/R8D8 128-state code.

10.6 Theory of the Coset Code Implementation

This section investigates the simplification of the implementation of both the encoder and the decoder for a multidimensional trellis code.

10.6.1 Encoder Simplification

The general coset-code encoder diagram is useful mainly for illustrative purposes. Actual implementations using such a diagram as a guideline would require excessive memory for the implementation of the coset-select and signal-select functions. In practice, the use of two-dimensional constituent subsymbols in Z^2 is possible, even with codes that make use of more dense multidimensional lattices. All the lattices discussed in this chapter are known as binary lattices, which means that they contain 2Z^N as a sublattice. With such binary lattices, additional partitioning can enlarge the number of cosets so that 2Z^N and its cosets partition the original lattice Z^N upon which the constellation was based.
The number of additional binary partitions needed to get from Λ′ to 2Z^N is known as the informativity of Λ′, k(Λ′). This quantity is often normalized to the number of dimensions to get the normalized informativity of the lattice, κ(Λ′) = k(Λ′)/N.

The addition of the extra partitioning bits to the coset code's convolutional encoder and coset selection creates a rate [k + k(Λ′)] / [k + k(Λ′) + r_G] convolutional code that selects cosets of 2Z^N. These cosets can then be trivially separated into two-dimensional cosets of 2Z^2 (by undoing the concatenation). These 2D cosets can then be independently specified by separate (reduced complexity and memory) two-dimensional signal selects. Wei's 16-state 4D code and 64-state 8D codes will illustrate this concept for multidimensional codes.

4D Encoder with rate 2/3 and Z^4/R4D4: An example is the encoder for a four-dimensional trellis code that transmits 10 bits per 4D symbol, or b = 10. The redundant extra bit from G (r_G = 1) requires a signal constellation with 2^11 points in 4 dimensions. The encoder uses an uncoded constellation that is the Cartesian product of two 32CR constellations. 32CR is a subset of Z^2, and the uncoded constellation would be a subset of Z^4. A desirable implementation keeps the redundancy (extra levels) in the two 2D constituent subsymbols as small as practically possible. The extension of 32CR to what is called 48CR appears in Figure 10.38. 16 new points are added at the lowest 2D-energy positions outside the 32CR to form 48CR. The 32CR points are denoted as inner points and the 16 new points are denoted as outer points. The Cartesian product of the two 48CR (a subset of Z^4), with deletion of those points that correspond to (outer, outer), forms the coded constellation. This leaves 2^10 (inner, inner) points, 2^9 (inner, outer) points, and 2^9 (outer, inner) points, for a total of 2^10 + 2·2^9 = 2^10 + 2^10 = 2^11 = 2048 points. The probability of an inner point occurring is 3/4, while the probability of an outer point is 1/4. The two-dimensional symbol energy is thus

    Ē_x(two-dimensional) = (3/4) E_32CR + (1/4) E_outer = .75(5) + .25 · (2(12.5) + 12.5 + 14.5)/4 = 7,   (10.243)

so that Ē_x = 7/2.
The shaping gain for 48CR is then

γs = 2^{2(1/4)}·(2^5 − 1) / (12·(7/2)) = 43.84/42 = 1.0438 = .19 dB .   (10.244)

The shaping gain of 32CR has already been computed as .14 dB, thus the net gain in shaping is only about .05 dB; however, a nominal deployment of a two-dimensional code would have required 64 points, leading to more constellation expansion, which might be undesirable. Since the code G is rate 2/3, the 3 output bits of G are used to select one of the 8 four-dimensional cosets C_4,0, ..., C_4,7, and the 8 remaining bits would specify which of the parallel transitions in the selected C_4,i would be transmitted, requiring a look-up table with 256 locations for each coset. Since there are 8 such cosets, the encoder would nominally need 8 · 256 = 2048 locations in a look-up table to implement the mapping from bits to transmitted signal. Each location in the table would contain 4 dimensions of a symbol. The large memory requirement can be mitigated to a large extent by using the structure illustrated in Figure 10.39.

Figure 10.38: 48 Cross constellation.


Figure 10.39: 4D coset-select implementation.

Input bits   1st 2D subsym.          2nd 2D subsym.
u4 u5 u6     ss1 ss0   position      ss3 ss2   position
0  0  0      0   0     left-inner    0   0     left-inner
0  0  1      0   0     left-inner    0   1     right-inner
0  1  0      0   0     left-inner    1   0     outer
0  1  1      0   1     right-inner   0   0     left-inner
1  0  0      0   1     right-inner   0   1     right-inner
1  0  1      0   1     right-inner   1   0     outer
1  1  0      1   0     outer         0   0     left-inner
1  1  1      1   0     outer         0   1     right-inner

Table 10.18: Truth Table for 4D inner/outer selection (even b)

Because of the partitioning structure, each 4D coset such as C_4^0 can be written as the union of two Cartesian products, for instance

C_4^0 = (C_2^0 × C_2^0) ∪ (C_2^2 × C_2^2) ,   (10.245)

as in Section 10.5. Bit v3 specifies which of these two Cartesian products, (C_2^0 × C_2^0) or (C_2^2 × C_2^2), contains the selected signal. These two Cartesian products are now in the desired form of cosets of 2Z4. Thus, k(R4D4) = 1. The four bits v0, ..., v3 can then be input into binary linear logic to form ṽ0, ..., ṽ3. These four output bits then specify which 2D coset C_2^i, i = 0, 1, 2, 3, is used on each of the constituent 2D subsymbols that are concatenated to form the 4D code symbol. A clock of speed 2/T is used to control a 4:2 multiplexer that chooses bits ṽ0 and ṽ1 for the first 2D subsymbol and the bits ṽ2 and ṽ3 for the second 2D subsymbol within each symbol period. One can verify that the relations

ṽ3 = v2 + v3   (10.246)
ṽ2 = v0 + v1   (10.247)
ṽ1 = v3   (10.248)
ṽ0 = v1   (10.249)

will ensure that the 4D coset C_4^i, with i specified by [v2, v1, v0], is correctly translated into the two constituent 2D cosets that comprise that particular 4D coset. The remaining input bits (u4, ..., u10) then specify points in each of the two subsymbols. The inner/outer/left/right select described in Table 10.18 takes advantage of the further separation of each 2D coset into 3 equal-sized groups (left-inner, right-inner, and outer), as illustrated in Figure 10.39. Note the left-inner and right-inner sets are chosen to ensure equal numbers of points from each of the 4 two-dimensional cosets.
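A minimal sketch of the coset-select logic and inner/outer map just described (function and table names are illustrative, not from the text; the pairing of (ṽ1, ṽ0) with the first subsymbol and (ṽ3, ṽ2) with the second follows the multiplexer description above):

```python
def coset_select_4d(v3, v2, v1, v0):
    """Eqs. (10.246)-(10.249): translate 4D-coset bits into the 2D-coset
    index (0..3) used for each of the two constituent 2D subsymbols."""
    vt3, vt2, vt1, vt0 = v2 ^ v3, v0 ^ v1, v3, v1
    return 2 * vt1 + vt0, 2 * vt3 + vt2     # (first subsymbol, second subsymbol)

# Table 10.18: (u4, u5, u6) -> inner/outer position of each 2D subsymbol.
INNER_OUTER = {
    (0, 0, 0): ("left-inner",  "left-inner"),
    (0, 0, 1): ("left-inner",  "right-inner"),
    (0, 1, 0): ("left-inner",  "outer"),
    (0, 1, 1): ("right-inner", "left-inner"),
    (1, 0, 0): ("right-inner", "right-inner"),
    (1, 0, 1): ("right-inner", "outer"),
    (1, 1, 0): ("outer",       "left-inner"),
    (1, 1, 1): ("outer",       "right-inner"),
}
# (outer, outer) never occurs, matching the deleted points of the 48CR product.
assert all(pos != ("outer", "outer") for pos in INNER_OUTER.values())
```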
The outputs of this (nonlinear) bit map are then also separated into constituent 2D subsymbols by a multiplexer, and the remaining 4 bits (u7, u8, u9, u10) are also so separated. The final encoder requires only 8 locations for the inner/outer selection and 64 locations (really only 48 are used, because some of the combinations for the ss (signal select) bits do not occur) for the specification of each constituent subsymbol. This is a total of only 72 locations, significantly less than 2048, and only 2 dimensions of a symbol are stored in each location. The size of the 64-location 2D signal select is further reduced by Wei's observation that the 4 two-dimensional cosets can be generated by taking any one of the cosets and performing sign permutations on the two dimensions within the subsymbol. Further, by storing an appropriate value (with signs) in a look-up table, the necessary point can be generated by permuting signs according to the two bits being supplied for each 2D subsymbol from the CS function. Then the 2D signal selection would require only 16 locations, instead of 64. Then, the memory requirement would be (with a little extra logic to do the sign changes) 16 + 8 = 24 locations. We note b ≥ 6 and b must be even for this encoder to work (while for b > 6, only the 2D signal select changes, and the memory size, using Wei's sign method to reduce by a factor of 4, is 4 · 2^{(b−6)/2}). For odd b, b + 1 is even, the constellation is square, and the inner/outer/left/right partitioning of the constituent 2D subsymbols is unnecessary.

Figure 10.40: 40 Cross constellation (left/right-upper and left/right-lower inner points, plus outer points).

8D Encoder with rate 3/4, and Z8/E8   An encoder for b̄ = 2.5 and a 64-state eight-dimensional trellis code transmits 20 bits per 8D symbol, or b = 20. The redundant extra bit from G (rG = 1) requires a signal constellation with 2^21 points in 8 dimensions. The uncoded constellation is the Cartesian product of four 32CR constellations. Note that 32CR is a subset of Z2, and our uncoded constellation would be a subset of Z8.
A desirable implementation keeps the redundancy (extra levels) in the four 2D constituent sub-symbols as small as is practically possible. 32CR extends to what is called 40CR, shown in Figure 10.40. 8 new points are added at the lowest 2D-energy positions outside the 32CR to form 40CR. The 32CR points are denoted as inner points and the 8 new points are denoted as outer points. The Cartesian product of the four 40CR (a subset of Z8), with deletion of those points that have more than one outer point, forms the 8-dimensional transmitted signal constellation. This leaves 2^20 (in, in, in, in) points, 2^18 (in, in, in, out) points, 2^18 (in, in, out, in) points, 2^18 (in, out, in, in) points, and 2^18 (out, in, in, in) points for a total of 2^20 + 4·2^18 = 2^20 + 2^20 = 2^21 = 2,097,152 points. The probability of an inner point occurring is 7/8, while the probability of an outer is 1/8. The two-dimensional symbol energy is thus

Ēx = (7/8)·E_32CR + (1/8)·E_outer = .875(5) + .125(12.5) = 5.9375 ,   (10.250)

so that Ēx = 5.9375/2. The shaping gain for 40CR is then

γs = 2^{2(1/8)}·(2^5 − 1) / (12·(5.9375/2)) = 36.8654/35.625 = .15 dB .   (10.251)

The shaping gain of 32CR has already been computed as .14 dB, thus the net gain in shaping is only about .01 dB; however, a nominal deployment of a two-dimensional code would have required 64 points, leading to more constellation expansion, which may be undesirable. Since the code G is rate 3/4, the 4 output bits of G are used to select one of the 16 eight-dimensional cosets C_8^0, ..., C_8^15, and the 17 remaining bits would specify which of the parallel transitions in the selected C_8^i would be transmitted, requiring a look-up table with 2^17 = 131,072 8-dimensional locations for each coset. Since there are 16


such cosets, the encoder would nominally need 2,097,152 8-dimensional locations in a look-up table to implement the mapping from bits to transmitted signal. The large memory requirement can be mitigated to a large extent by using the structure illustrated in Figure 10.41.

Figure 10.41: 8D coset-select implementation.

ss2 ss1 ss0   position
0   0   0     left-top-inner
0   0   1     left-bottom-inner
0   1   0     right-top-inner
0   1   1     right-bottom-inner
1   0   0     outer

Table 10.19: Truth Table for inner/outer selection (even b)

From the partitioning structure, each 8D coset such as C_8^0 can be written as the union of four Cartesian products, for instance

C_8^0 = (C_4^0 × C_4^0) ∪ (C_4^2 × C_4^2) ∪ (C_4^4 × C_4^4) ∪ (C_4^6 × C_4^6) ,   (10.252)

and each of these Cartesian products is an R4D4 sublattice coset. Bits u4 and u5 specify which of these four Cartesian products contains the selected signal. The original convolutional encoder was rate 3/4, and it now becomes rate 5/6. The six bits v0 ... v5 can then be input into a linear circuit, with the relations illustrated in Figure 10.41, that outputs six new bits in two groups of three bits each. The first such group selects one of the eight 4D cosets C_4^0 ... C_4^7 for the first 4D constituent sub-symbol, and the second group does exactly the same thing for the second constituent 4D subsymbol. The encoder uses two more bits, u6 and u7, as inputs to exactly the same type of linear circuit that was used for the previous four-dimensional code example. We could have redrawn this figure to include u6 and u7 in the convolutional encoder, so that the overall convolutional encoder would have been rate 7/8. Note that k(E8) = 4 (u4, u5, u6, and u7). The nine bits u8 ... u16 are then used to specify the inner/outer selection abbreviated by Table 10.19. The 9 output bits of this inner/outer selection follow identical patterns for the other 2D constituent subsymbols. There are 512 combinations (4^4 + 4·4^3), prohibiting more than one outer from occurring in any 8D symbol. The final encoder requires only 512 locations for the inner/outer/left/right/upper/lower circuit, and 64 locations (only 40 actually used) for the specification of each constituent subsymbol. This is a total of only 552 locations, significantly less than 2,097,152, and each is only two-dimensional.
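A quick numeric check of the constellation and memory bookkeeping above (a sketch; variable names are illustrative):

```python
# Numeric check of the 8D (40CR^4) constellation and table sizes.
inner, outer = 2**5, 2**3                    # 32CR inner points and 8 outer points

points_8d = inner**4 + 4 * inner**3 * outer  # at most one outer subsymbol allowed
assert points_8d == 2**21                    # 2^20 + 4*2^18 = 2,097,152

combos = 4**4 + 4 * 4**3                     # 512 inner/outer bit combinations
locations = combos + 40                      # plus the 40 used 2D signal-select entries
print(points_8d, locations)                  # 2097152 552
```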
Further reduction of the size of the 64-location memory occurs by noting that the 4 two-dimensional cosets can be generated by taking any one of the cosets and performing sign permutations on the two dimensions within the subsymbol. Then, the memory requirement would be (with a little extra logic to do the sign changes) 512 + 16 = 528 locations.

10.6.2 Decoder Complexity

The straightforward implementation of this maximum-likelihood sequence detector for 4D and 8D codes can be very complex. This is because of the (usually) large number of parallel transitions between any two pairs of states. In one or two dimensions with (possibly scaled) Z or Z2 lattices, the closest point to a received signal is found by simple truncation. However, the determination of the closest point within a set of parallel transitions that fall on a dense 4D or 8D lattice can be more difficult. This subsection studies such decoding for both the D4 and E8 lattices. A special form of the Viterbi algorithm can readily be used in resolving the closest point within a coset, just as another form of the Viterbi algorithm is useful in deciding which sequence of multidimensional cosets is closest to the received sequence.

Decoding the D4 Lattice   By definition:

D4 = R4Z4 ∪ (R4Z4 + [0, 1, 0, 1])   (10.253)
   = (R2Z2 × R2Z2) ∪ ((R2Z2 + [0, 1]) × (R2Z2 + [0, 1])) ,   (10.254)

which can be illustrated by the trellis in Figure 10.25. Either of the two paths through the trellis describes a sequence of two 2D subsymbols which can be concatenated to produce a valid 4D symbol in D4. Similarly, upon receipt of a 4D channel output, the decoder determines which of the paths through the trellis in Figure 10.25 was closest. This point decoder can use the Viterbi algorithm to perform this decoding function. In so doing, the following computations are performed:

trellis position   subsymbol (1 or 2)   adds (or compares)
R2Z2               1                    1 add
R2Z2               2                    1 add
R2Z2 + [0,1]       1                    1 add
R2Z2 + [0,1]       2                    1 add
middle states                           2 adds
final state                             1 compare
Total              1 & 2                7 ops

The R4D4 lattice has essentially the same trellis, as shown in Figure 10.42.
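As an illustrative sketch (not the text's implementation), the two-path D4 point decoder just described can be coded directly, using the fact that R2Z2 is the set of integer pairs with even coordinate sum and R2Z2 + [0,1] the set with odd coordinate sum:

```python
import math

def nearest_with_parity(y, parity):
    """Nearest point of Z^2 whose coordinate sum is even (parity 0, the set
    R2*Z^2) or odd (parity 1, the set R2*Z^2 + [0,1])."""
    z = [math.floor(c + 0.5) for c in y]
    if (z[0] + z[1]) % 2 != parity:
        # cheapest fix: move the coordinate with the largest rounding error
        i = max((0, 1), key=lambda k: abs(y[k] - z[k]))
        z[i] += 1 if y[i] >= z[i] else -1
    return z

def decode_d4(y):
    """Two-path trellis decode of (10.254): both 2D halves even-sum, or both
    odd-sum; the final compare keeps the smaller squared distance."""
    candidates = []
    for parity in (0, 1):
        z = nearest_with_parity(y[:2], parity) + nearest_with_parity(y[2:], parity)
        candidates.append((sum((a - b) ** 2 for a, b in zip(y, z)), z))
    return min(candidates)[1]

print(decode_d4([0.6, 0.2, 0.1, -0.3]))   # [0, 0, 0, 0], an even-sum (D4) point
```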
The complexity for decoding R4D4 is also the same as for the D4 lattice (7 operations). To choose a point in each of the 8 cosets of Λt = R4D4 in a straightforward manner, the Viterbi decoding repeats 8 times, once for each of the 8 cosets of R4D4. This requires 56 operations for the coset determination alone. However, 32 operations are sufficient: First, the two cosets of D4 partition Z4 into two parts, as illustrated in Figure 10.43. The 2-dimensional cosets B2,0 and B2,1, for both the first 2D subsymbol and the second 2D subsymbol, are common to both trellises. Once the decoder has selected the closest point in either of these two cosets (for both the first subsymbol and again for the second subsymbol), it need not repeat that computation for the other trellis. Thus, the decoder needs 7 computations to decode one of the cosets, but only 3 additional computations (2 adds and 1 compare) to also decode the second coset, leaving a total of 10 computations (which is less than the 14 that would have been required had the redundancy in the trellis descriptions for the two cosets not been exploited).

Returning to R4D4, the 8 trellises describing the 8 cosets C_4^i, i = 0, ..., 7, used in a 4D (Z4/R4D4) trellis code are comprised of 4 2D cosets (C_2^0, C_2^1, C_2^2 and C_2^3) for both the first 2D subsymbol and for the second 2D subsymbol. The decoding of these cosets (for both subsymbols) thus requires 8 additions. Then the decoding performs 3 computations (2 adds and 1 compare) for each of the 8 trellises

Figure 10.42: Trellis for the R4D4 lattice.


Figure 10.43: Trellis for the two cosets of the D4 lattice.

corresponding to the 8 cosets, requiring an additional 24 computations. Thus, the total computation required to decode all 8 cosets is 32 computations. Normalized to 1 dimension, there are 8 computations for all 8 cosets of R4D4. We can now return to the complexity of decoding the entire coset code. The complexity of decoding using the Viterbi detector for the trellis code, after parallel transitions have been resolved, is

ND(trellis) = 2^ν · [2^k adds + (2^k − 1) compares] .   (10.255)

The overall complexity ND(C) is the sum of the complexity of decoding the 2^{k+rG} cosets of Λt plus the complexity of decoding the sequences of multidimensional codewords produced by the trellis code.

EXAMPLE 10.6.1 (Wei's 16-state 4D code, with r = 2/3)   This code is based on the partition Z4/R4D4 and thus requires 32 operations to decode the 8 sets of parallel transitions. Also, as k = 2 and 2^ν = 16, there are 16·(4 + 3) = 112 operations for the remaining 4D sequence detection. The total complexity is thus 112 + 32 = 144 operations per 4D symbol. Thus N̄D = 144/4 = 36, which was the corresponding entry in Table 10.16 earlier.

The rest of the N̄D entries in the 4D table are computed in a similar fashion.

10.6.3 Decoding the Gossett (E8) Lattice

The decoding of an E8 lattice is similar to the decoding of the D4 lattice, except that it requires more computation. It is advantageous to describe the decoding of the 16 cosets of E8 in Z8 in terms of the easily decoded 2D constituent symbols. The E8 lattice has decomposition:

E8 = R8D8 ∪ (R8D8 + [0, 1, 0, 1, 0, 1, 0, 1])   (10.256)
   = (R4D4)^2 ∪ (R4D4 + [0, 0, 1, 1])^2   (10.257)
     ∪ (R4D4 + [0, 1, 0, 1])^2 ∪ (R4D4 + [0, 1, 1, 0])^2 ,   (10.258)

which uses the result

R8D8 = (R4D4)^2 ∪ (R4D4 + [0, 0, 1, 1])^2 .   (10.259)

The trellis decomposition of the E8 in terms of 4D subsymbols as described by (10.258) is illustrated in Figure 10.27. The recognition that the 4D subsymbols in Figure 10.27 can be further decomposed as in Figure 10.25 leads to Figure 10.28.
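The complexity accounting of (10.255) and Examples 10.6.1-10.6.2 can be checked numerically (a sketch; `nd_trellis` is an illustrative name):

```python
def nd_trellis(states, k):
    """Eq. (10.255): per-symbol adds plus compares of the trellis search."""
    return states * (2**k + (2**k - 1))

# Example 10.6.1: Wei 16-state 4D code (Z4/R4D4); 32 ops resolve parallel transitions.
total_4d = 32 + nd_trellis(16, 2)
assert total_4d == 144 and total_4d // 4 == 36      # Table 10.16 entry

# Example 10.6.2: Wei 64-state 8D code (Z8/E8); 176 ops resolve the 16 E8 cosets.
total_8d = 176 + nd_trellis(64, 3)
assert total_8d == 1136 and total_8d // 8 == 142    # Table 10.17 entry
```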
The coset leader [1, 1] is equivalent to [−1, −1] for the C-level cosets in two dimensions, which has been used in Figures 10.27 and 10.28.

The complexity of decoding the E8 lattice requires an addition for each of the 4 cosets of R2D2 for each of the 4 2D subsymbols, leading to 16 additions. Then, for each of the (used) 4 cosets of R4D4, decoding requires an additional 2 adds and 1 compare, leading to 12 computations for each 4D subsymbol, or 24 total. Finally, the 4D trellis in Figure 10.27 requires 4 adds and 3 compares. The total is then 16 + 24 + 7 = 47 operations.

All 16 cosets of the E8 lattice require 16 additions for the 4 2D C-level partitions, 48 computations for both sets (first 4D plus second 4D) of 8 cosets of R4D4 that are used, and finally 16·(7) = 112 computations for decoding all sixteen versions of Figure 10.27. This gives a total of ND = 16 + 48 + 112 = 176, or N̄D = 22.

EXAMPLE 10.6.2 (Wei's 64-state 8D code, with r = 3/4)   This code is based on the partition Z8/E8 and requires 176 operations to decode the 16 cosets of E8. Also, with k = 3 and 2^ν = 64, there are 64·(8 + 7) = 960 operations for the remaining 8D sequence detection. The total complexity is thus 960 + 176 = 1136 operations per 8D symbol. Thus N̄D = 1136/8 = 142, which was the corresponding entry in Table 10.17 earlier.

10.6.4 Lattice Decoding Table

Table 10.20 lists the decoding complexity to find all the cosets of Λt in Λ.

Λ/Λt      | |Λ/Λt| | ND    | N̄D
Z2/D2     | 2      | 2     | 1
Z2/2Z2    | 4      | 4     | 2
Z2/2D2    | 8      | 8     | 4
Z2/4Z2    | 16     | 16    | 8
Z4/D4     | 2      | 10    | 2.5
Z4/R4D4   | 8      | 32    | 8
Z4/2D4    | 32     | 112   | 28
Z4/2R4D4  | 128    | 416   | 104
D4/R4D4   | 4      | 20    | 5
D4/2D4    | 16     | 64    | 16
D4/2R4D4  | 64     | 224   | 56
Z8/D8     | 2      | 26    | 3.25
Z8/E8     | 16     | 176   | 22
Z8/R8E8   | 256    | 2016  | 257
Z8/2E8    | 4096   | 29504 | 3688
D8/E8     | 8      | 120   | 15
D8/R8E8   | 128    | 1120  | 140
D8/2E8    | 2048   | 15168 | 1896
E8/R8E8   | 16     | 240   | 30
E8/2E8    | 256    | 2240  | 280

Table 10.20: Lattice Decoding Complexity for all Cosets of Selected Partitions

10.7 Shaping Codes

Shaping codes for the AWGN improve shaping gain up to a maximum of 1.53 dB (see also Problems 10.17 and 10.18). Following a review of this limit, this section will proceed to the 3 most popular shaping-code methods. The shaping gain of a code is again

γs = (V_x^{2/N} / Ēx) / (V_x̄^{2/N} / Ēx̄) ,
(10.260)

where the denominator corresponds to the cubic reference. Presuming the usual Z^N-lattice cubic reference, the best shaping gain occurs for a spatially uniform distribution of constellation points within a hyperspheric volume7. For an even number of dimensions N = 2n, an N-dimensional sphere of radius r is known to have volume

V(sphere) = π^n r^{2n} / n! ,   (10.261)

and energy per dimension

Ē(sphere) = (1/2) · (r^2 / (n + 1)) .   (10.262)

The asymptotic-equipartition analysis of Chapter 8 suggests that, if the number of dimensions goes to infinity, so that n → ∞, then there is no better situation for overall coding gain, and thus for shaping, than the uniform distribution of equal-size regions throughout the volume. For the AWGN, the marginal one- (or two-) dimensional input distributions are all Gaussian at capacity and correspond to a uniform distribution over an infinite number of dimensions.8 Since the coding gain is clearly independent of region shape, the shaping gain must be maximum when the overall gain is maximum, which is thus the uniform-over-hypersphere/Gaussian case. Thus, the asymptotic situation provides an upper bound on shaping gain. The uncoded reference used throughout this text is again

Ēx = (1/12)·(2^{2b̄} − 1)   (10.263)
V_x^{2/N} = 2^{2b̄} .   (10.264)

Then, algebraic substitution of (10.261)-(10.264) into (10.260) leads to the shaping-gain bound

γs ≤ [ π r^2 (n!)^{−1/n} / ( r^2/(2(n+1)) ) ] · [ (1 − 2^{−2b̄}) / 12 ] = (π/6)·(n!)^{−1/n}·(n + 1)·(1 − 2^{−2b̄}) .   (10.265)

Table 10.21 evaluates the formula in (10.265) for b̄ → ∞, while Table 10.22 evaluates this formula for n → ∞. With b̄ infinite, and then taking limits as n → ∞ for the asymptotic best case using Stirling's approximation (see Problems 10.17 and 10.18),

lim_{n→∞} γs ≤ lim_{n→∞} (π/6)·(n + 1)·(n!)^{−1/n} = (π/6)·lim_{n→∞} (n + 1)/(n/e) = πe/6 = 1.53 dB .   (10.266)

Equation (10.265) provides a shaping-gain bound for any finite b̄ and n, and corresponds to uniform density of the 2^b points within a 2n-dimensional sphere.
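The bound (10.265) and its limit (10.266) can be evaluated directly (a sketch; `shaping_gain_db` is an illustrative name, and `lgamma` keeps n! manageable for large n):

```python
import math

def shaping_gain_db(n, b=None):
    """Eq. (10.265) in dB for N = 2n dimensions; b=None gives the
    b -> infinity limit. lgamma(n+1) = ln(n!)."""
    lin = (math.pi / 6) * (n + 1) * math.exp(-math.lgamma(n + 1) / n)
    if b is not None:
        lin *= 1 - 2.0 ** (-2 * b)
    return 10 * math.log10(lin)

print(round(shaping_gain_db(1), 2))       # 0.2  dB  (N = 2, as in Table 10.22)
print(round(shaping_gain_db(4), 2))       # 0.73 dB  (N = 8)
print(round(shaping_gain_db(10000), 2))   # 1.53 dB, approaching pi*e/6
```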
Equation (10.266) is the overall asymptotically attainable bound on shaping gain.

7 Inspection of (10.260)'s numerator observes that, for any given radius, which is directly proportional to the square root of energy √Ex, a sphere has the largest volume and thus the largest number of points if they can be uniformly situated spatially.

8 A simple Gaussian proof sketch: a point component in any dimension can be obtained by linear projection, which has an infinite number of terms, thus satisfying the central limit theorem in infinite dimensions. Thus all the component (or marginal) distributions are Gaussian. QED.

b̄    | γs (n → ∞)
∞    | 1.53 dB
4    | 1.52 dB
3    | 1.46 dB
2    | 1.25 dB
1    | 0.28 dB
0    | 0 dB

Table 10.21: Shaping-gain limits for an infinite number of dimensions.

n (N)         | γs (b̄ → ∞)
1 (2)         | 0.20 dB
2 (4)         | 0.46 dB
3 (6)         | 0.62 dB
4 (8)         | 0.73 dB
12 (24)       | 1.10 dB
16 (32)       | 1.17 dB
24 (48)       | 1.26 dB
32 (64)       | 1.31 dB
64 (128)      | 1.40 dB
100 (200)     | 1.44 dB
500 (1000)    | 1.50 dB
1000 (2000)   | 1.52 dB
10000 (20000) | 1.53 dB

Table 10.22: Shaping-gain limits for an infinite number of constellation points.

Figure 10.44: Shell Construction Illustration for 32SH (= 32CR), shells i = 1, ..., 5.

10.7.1 Non-Equiprobable Signaling and Shell Codes

Shell constellations were introduced by former EE379 student Paul Fortier in his PhD dissertation. They are conceptually straightforward and partition any constellation into shells (or rings). Each shell is a group of constellation points contained on a circle (in 2D), as in Figure 10.44. Shell (SH) constellations have all points within a group (or shell) at constant energy. These shell groups are indexed by i = 1, ..., G, and each has energy Ei and contains Mi points. The probability that a point from group i is selected by an encoder is pi. For the example of Figure 10.44, there are 5 shell groups with two-dimensional energies (for d = 2) E1 = 2, E2 = 10, E3 = 18, E4 = 26, E5 = 34. These contain M1 = 4, M2 = 8, M3 = 4, M4 = 8, and M5 = 8 points respectively. The shells have respective probabilities p1 = 1/8, p2 = 1/4, p3 = 1/8, p4 = 1/4 and p5 = 1/4 if each point in the constellation is equally likely to occur. The average energy of this constellation can be easily computed as

Ēx = Σ_{i=1}^{G} pi·Ei = 2/8 + 10/4 + 18/8 + 26/4 + 34/4 = 20 ,   (10.267)

with consequent shaping gain

γs = (1/6)·(2^5 − 1)·d^2 / 20 = 31/30 = 0.14 dB (for d = 2) .   (10.268)

The rectangular-lattice constellation (a coset of 2Z2) in 2D that uses the least energy for 32 points is 32CR = 32SH, because it corresponds to using the shells with the least successive energies first. 128CR does not have such a least-energy property, and a 128SH constellation is more complicated. The 128-point shell constellation 128SH uses 17 shells of energies (number of points) (for d = 2) of 2(4), 10(8), 18(4), 26(8), 34(8), 50(12), 58(8), 74(8), 82(8), 90(8), 98(4), 106(8), 122(8), 130(16), 146(8), 162(4), and 170 (4 of 16 possible). The 128SH uses the points (±9, ±9) that do not appear in 128CR. The shaping gain of 128SH is .17 dB and is slightly higher than the .14 dB of 128CR. Problem 10.26 discusses a 64SH constellation (which is clearly not equal to 64SQ).

The number of bits per dimension can approach the entropy per dimension with well-designed input buffering (and possibly long delay) as in Chapter 8. While a constellation may have a uniform density over N dimensions, the marginal distribution for one- or two-dimensional constituent subsymbols need not be uniform. Sometimes this is used in shaping to effect large-dimensional distributions with unequal probabilities in a smaller number of dimensions. The entropy per symbol of any constellation, with point probabilities pij for the jth point in the ith group, upper-bounds the bits per one- or two-dimensional, or any finite-dimensional, symbol:

b ≤ H = Σ_{i=1}^{G} Σ_{j=1}^{Mi} pij·log2(1/pij) .   (10.269)

Equation (10.269) holds even when the groups are not shells. When all points are equally likely, b is simply computed in the usual fashion as

b = H = Σ_{i=1}^{G} Σ_{j=1}^{Mi} (1/M)·log2(M) = log2(M) = log2( Σ_i Mi ) .   (10.270)

When each point within a group has equal probability of occurring (but the group probability is not necessarily the same for each group, as in the 32CR and 128SH examples above), pij = pi/Mi, the same H can be written

H = Σ_{i=1}^{G} (pi/Mi) Σ_{j=1}^{Mi} log2(Mi/pi) = Σ_{i=1}^{G} pi·log2(Mi/pi) ,   (10.271)

which, if bi = log2(Mi), then becomes

H = Σ_{i=1}^{G} pi·log2(1/pi) + Σ_{i=1}^{G} pi·bi = H(p) + Σ_{i=1}^{G} pi·bi ,   (10.272)

where H(p) is the entropy of the group probability distribution. A good shaping-code design would cause b = H, as is evident later. Equation (10.272) supports the intuition that the overall data rate is the sum of the data rates for each group plus the entropy of the group probability distribution.

The one-dimensional slices of the constellation should favor points with smaller energy; that is, smaller energies have a higher probability of occurrence. For instance in 32CR, a one-dimensional slice has p1 = 12/32, p3 = 12/32, but p5 = 8/32. As the dimensionality N → ∞, the enumeration of shells can become very tedious, but otherwise follows the two-dimensional examples above. Furthermore, as N → ∞ the one-dimensional distribution of amplitudes (for large numbers of points) approaches Gaussian9, with very large amplitudes in any dimension having low probability of occurrence. Such low probability of occurrence reduces the energy of that dimension and thus increases shaping gain. The non-uniform distribution in one (or two) dimensions suggests that a succession of two-dimensional constellations, viewed as one shaping-code codeword, might have extra points (that is, more than 2^{2b̄}, as also in trellis codes), with points in outer shells (or groups) having lower probability than an equally likely 2^{2b̄} in one dimension. For instance, a 32CR constellation might be used with b̄ = 2 < 2.5, but with points in the outer shells at much lower probability of occurrence than points in the inner shell, when viewed over a succession of N/2 2D constellations as one large shaping codeword. Then, the two-dimensional M in equation (10.272) is greater than 2^{2b̄}, but Equation (10.272) still holds with b directly computed even though pij ≠ 1/M, but rather pij = pi ·

9 The result, if viewed in terms of 2D constellations, is that the 2D complex amplitudes approach a complex Gaussian distribution.

Figure 10.45: 48SH Constellation (with extra 4 empty points for 52SH).

(1/Mi). The number of points in two dimensions is not the (N/2)-th root of the true N-dimensional M. Indeed,

b = H(p) + Σ_{i=1}^{G} pi·log2(Mi)   (10.273)

is a good equation to use, since it avoids the confusion of M applying to a larger number of dimensions than 2.

Such was the observation of Calderbank and Ozarow in their Nonequiprobable Signalling (NES). In NES, a 2D constellation is expanded and viewed as a part of a larger multi-dimensional constellation. NES chooses outer groups/shells less frequently than inner groups (or less frequently than even Mi/2^{2b̄}). A simple example of NES was the 48CR constellation used for 4D constellations earlier in this chapter. In that constellation, the 32 inner two-dimensional points had probability of occurrence 3/4, while the 16 outer points had probability only 1/4. While this constellation also corresponds to a 2D shell constellation (see Figure 10.45), the probabilities of the various groups are not simply computed by dividing the number of points in each group by 48. Furthermore, the constellation is not a 4D shell, but does have a higher shaping gain than even the two-dimensional 128SH. In one dimension, the probabilities of the various levels are

p1 = (12/32)(3/4) + (4/16)(1/4) = 11/32   (10.274)
p3 = (12/32)(3/4) + (2/16)(1/4) = 10/32   (10.275)
p5 = (8/32)(3/4) + (4/16)(1/4) = 8/32   (10.276)
p7 = 0 + (6/16)(1/4) = 3/32 .   (10.277)

Equations (10.274)-(10.277) illustrate that large points in one dimension have low probability of occurrence. This 48CR constellation can be used to transmit 5.5 bits in two dimensions, emphasizing that 2^{5.5} < 48, so that the points are not equally likely. An astute reader might note that 4 of the points in the outermost 2D shell in Figure 10.45 could have been included, with no increase in energy, to carry an additional (1/4)·(4/16)·1 = 1/16 bit in two dimensions. The energy of 48CR (and thus of 52CR) is

Ēx = (3/4)·20 + (1/4)·[(50·12 + 58·4)/16] = 28 ,   (10.278)

leading to a shaping gain for b = 5.5 of

γs = (1/6)·(2^{5.5} − 1)·4 / 28 = .23 dB ,   (10.279)

and for 52CR's b = 5.625 of .42 dB.
The higher shaping gain of the 52SH simply suggests that sometimes the best spherical approximation occurs with a number of points that essentially covers all the points interior to a hypersphere and is not usually a nice power of 2.

The 48CR/52CR example is not a shell constellation in 4D, but does provide good shaping gain relative to 2D because of the different probabilities (3/4 and 1/4) of inner and outer points. Essentially, 48CR/52CR is a 4D shaping method, but uses 2D constellations with unequal probabilities. The four-dimensional probability of any of the 4D 48SH/52SH points is still uniform and is 2^{−4b̄}. That is, only the 2D component constellations have non-uniform probabilities. Again, for a large number of constellation points and infinite dimensionality, the uniform distribution over a large number of dimensions with an average energy constraint leads to Gaussian marginal distributions in each of the dimensions.

For the 48CR/52CR, or any 4D inner/outer design with an odd number of bits per symbol, Table 10.18 provided a nonlinear binary code with 3 bits in and 4 bits out that essentially maps input bits into two partitions of the inner group or the outer group. The left-inner and right-inner partitioning of Table 10.18 is convenient for implementation, but the essence of that nonlinear code is the mapping of inputs into a description of inner group (call it 0) and outer group (call it 1). This general description would like as few 1's (outers) per codeword as possible, with the codewords being (0,0), (1,0), and (0,1), but never (1,1). For even numbers of bits in four dimensions, Table 10.18 was not necessary because there was no attempt to provide shaping gain in Section 10.6; the interest was coding gain, and the use of inner/outer constellations there was exclusively to handle the half-bit constellations, coincidentally providing shaping gain. However, the concept of the 3 codewords (0,0), (1,0), (0,1) remains if there are extra points in any 2D constellation beyond 2^{2b̄}. The set of 2D constellation points would be divided in