
Wireless Communications

Lecture 7: MIMO Communications

Prof. Chun-Hung Liu

Dept. of Electrical and Computer Engineering

National Chiao Tung University

Fall 2014


Outline

•  MIMO Communications (Chapter 10 in Goldsmith’s Book)
•  Narrowband MIMO Model
•  Parallel Decomposition of the MIMO Channel
•  MIMO Channel Capacity
•  MIMO Diversity Gain: Beamforming
•  Diversity-Multiplexing Trade-off
•  Space-Time Modulation and Coding


Circular Complex Gaussian Vectors

•  A complex random vector is of the form x = x_R + j x_I, where x_R and x_I are real random vectors.

•  Complex Gaussian random vectors are ones in which [x_R, x_I]^t is a real Gaussian random vector.

•  The distribution is completely specified by the mean and covariance matrix of the real vector [x_R, x_I]^t.

•  Define the mean μ = E[x], the covariance matrix K = E[(x − μ)(x − μ)^*], and the pseudo-covariance matrix J = E[(x − μ)(x − μ)^t], where A^* is the transpose of the matrix A with each element replaced by its complex conjugate, and A^t is just the transpose of A.

•  Note that in general the covariance matrix K of the complex random vector x by itself is not enough to specify the full second-order statistics of x. Indeed, since K is Hermitian, i.e., K = K^*, the diagonal elements are real and the elements in the lower and upper triangles are complex conjugates of each other.


•  In wireless communication, we are almost exclusively interested in complex random vectors that have the circular symmetry property: x is circularly symmetric if e^{jθ} x has the same distribution as x for every θ.

•  For a circular symmetric complex random vector x, E[x] = E[e^{jθ} x] = e^{jθ} E[x] for any θ; hence the mean μ = 0. Moreover, E[x x^t] = E[(e^{jθ} x)(e^{jθ} x)^t] = e^{j2θ} E[x x^t] for any θ; hence the pseudo-covariance matrix J is also zero.

•  Thus, the covariance matrix K fully specifies the first and second order statistics of a circular symmetric random vector.


•  And if the complex random vector is also Gaussian, K in fact specifies its entire statistics.

•  A circular symmetric Gaussian random vector with covariance matrix K is denoted as CN(0, K).

•  Some special cases:

•  A complex Gaussian random variable w = w_R + j w_I with i.i.d. zero-mean Gaussian real and imaginary components is circular symmetric. In fact, a circular symmetric Gaussian random variable must have i.i.d. zero-mean real and imaginary components.

•  A collection of n i.i.d. CN(0, 1) random variables forms a standard circular symmetric Gaussian random vector w and is denoted by CN(0, I). The density function of w can be explicitly written as f(w) = (1/π^n) exp(−‖w‖^2).

•  Uw has the same distribution as w for any complex orthogonal matrix U (such a matrix is called a unitary matrix and is characterized by the property UU^* = I).
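As a quick numerical check (my own addition, not part of the lecture; the dimension, sample size and random seed are arbitrary), the numpy sketch below draws samples of w ~ CN(0, I) and verifies that the mean and pseudo-covariance are approximately zero, the covariance is approximately the identity, and that Uw has the same second-order statistics for a random unitary U.

```python
import numpy as np

rng = np.random.default_rng(0)
n, num = 4, 200_000

# w ~ CN(0, I): real and imaginary parts are i.i.d. N(0, 1/2) per entry,
# so that E[w w^*] = I (unit variance per complex dimension).
w = (rng.standard_normal((num, n)) + 1j * rng.standard_normal((num, n))) / np.sqrt(2)

mean = w.mean(axis=0)                 # should be ~ 0
K = (w.conj().T @ w) / num            # covariance        E[w w^*] ~ I
J = (w.T @ w) / num                   # pseudo-covariance E[w w^t] ~ 0

# A random unitary U (from the QR decomposition of a complex Gaussian matrix)
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
wu = w @ U.T                          # samples of U w
K_u = (wu.conj().T @ wu) / num        # still ~ I: rotational invariance

print(np.abs(mean).max(), np.abs(J).max(),
      np.abs(K - np.eye(n)).max(), np.abs(K_u - np.eye(n)).max())
```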


Narrowband MIMO Model

•  Here we consider a narrowband MIMO channel. A narrowband point-to-point communication system with M_t transmit and M_r receive antennas is shown in the following figure.


•  In matrix form the channel input-output relationship is written simply as y = Hx + n, where y is the M_r × 1 received signal vector, H is the M_r × M_t matrix of channel gains h_{ij} from transmit antenna j to receive antenna i, x is the M_t × 1 transmitted signal vector, and n is the M_r × 1 noise vector.

•  We assume a channel bandwidth of B and complex Gaussian noise with zero mean and covariance matrix σ_n^2 I_{M_r}, where typically σ_n^2 = N_0 B.

•  For simplicity, given a transmit power constraint P we will assume an equivalent model with a noise power of unity and transmit power ρ = P/σ_n^2, where ρ can be interpreted as the average SNR per receive antenna under unity channel gain.

•  This power constraint implies that the input symbols satisfy

   Σ_{i=1}^{M_t} E[x_i x_i^*] = ρ,   (10.1)

or equivalently Tr(R_x) = ρ, where Tr(R_x) is the trace of the input covariance matrix R_x = E[x x^*].
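A minimal simulation of this normalized model, added here for illustration (the 2 × 4 antenna configuration, the QPSK alphabet and ρ = 10 are arbitrary choices of mine): the transmit power ρ is split equally over the M_t antennas and the noise has unit power per receive antenna.

```python
import numpy as np

rng = np.random.default_rng(1)
Mt, Mr = 2, 4
rho = 10.0                      # total transmit power with unit noise power (SNR)

# i.i.d. Rayleigh (ZMSW) channel: entries ~ CN(0, 1)
H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

# Equal power per transmit antenna so that E[x^* x] = Tr(R_x) = rho
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = np.sqrt(rho / Mt) * rng.choice(qpsk, size=Mt)

# Unit-power circularly symmetric noise per receive antenna
n = (rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr)) / np.sqrt(2)

y = H @ x + n                   # narrowband MIMO model y = Hx + n
print(y)
```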


Parallel Decomposition of the MIMO Channel

•  When both the transmitter and receiver have multiple antennas, there is another mechanism for performance gain called (spatial) multiplexing gain.

•  The multiplexing gain of a MIMO system results from the fact that a MIMO channel can be decomposed into a number R of parallel independent channels.

•  By multiplexing independent data onto these independent channels, we get an R-fold increase in data rate in comparison to a system with just one antenna at the transmitter and receiver.

•  Consider a MIMO channel with M_r × M_t channel gain matrix H known to both the transmitter and the receiver.

•  Let R_H denote the rank of H. From matrix theory, for any matrix H we can obtain its singular value decomposition (SVD) as

   H = U Σ V^H,   (10.2)

where U is an M_r × M_r unitary matrix, V is an M_t × M_t unitary matrix, and Σ is an M_r × M_t diagonal matrix of singular values {σ_i} of H.


•  Since R_H cannot exceed the number of columns or rows of H, R_H ≤ min(M_t, M_r). If H is full rank, which is sometimes referred to as a rich scattering environment, then R_H = min(M_t, M_r).

•  The parallel decomposition of the channel is obtained by defining a transformation on the channel input and output x and y through transmit precoding, x = V x̃, and receiver shaping, ỹ = U^H y, as shown in the following figure.

•  The transmit precoding and receiver shaping transform the MIMO channel into R_H parallel single-input single-output (SISO) channels with input x̃ and output ỹ, since from the SVD we have that

   ỹ = U^H (Hx + n) = U^H (U Σ V^H V x̃ + n) = Σ x̃ + U^H n = Σ x̃ + ñ,

where ñ = U^H n has the same distribution as n. The ith parallel channel is thus ỹ_i = σ_i x̃_i + ñ_i, i = 1, ..., R_H.
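The sketch below (my addition, with an arbitrary 3 × 4 channel) verifies the decomposition numerically: precoding with V and shaping with U^H turn H into R_H scalar channels whose gains are the singular values of H.

```python
import numpy as np

rng = np.random.default_rng(2)
Mt, Mr = 3, 4
H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)          # H = U @ diag(s) @ Vh, singular values s = sigma_i
RH = int(np.sum(s > 1e-12))          # rank of H

x_tilde = rng.standard_normal(Mt) + 1j * rng.standard_normal(Mt)   # data for the parallel channels
x = Vh.conj().T @ x_tilde            # transmit precoding  x = V x_tilde
n = (rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr)) / np.sqrt(2)

y = H @ x + n
y_tilde = U.conj().T @ y             # receiver shaping    y_tilde = U^H y

# Each of the first RH components is a scalar channel: y_tilde_i = sigma_i * x_tilde_i + n_tilde_i
print(np.allclose(y_tilde[:RH] - s[:RH] * x_tilde[:RH], (U.conj().T @ n)[:RH]))
```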


•  This parallel decomposition is shown in the following figure:


MIMO Channel Capacity

•  The capacity of a MIMO channel is an extension of the mutual information formula for a SISO channel given in Lecture 3 to a matrix channel.

•  Specifically, the capacity is given in terms of the mutual information between the channel input vector x and output vector y as

   C = max_{p(x)} I(X; Y) = max_{p(x)} [H(Y) − H(Y|X)].   (10.5)

•  The definition of entropy yields that H(Y|X) = H(N), the entropy in the noise. Since this noise n has fixed entropy independent of the channel input, maximizing mutual information is equivalent to maximizing the entropy in y.

•  The entropy of y depends on its covariance matrix, which for the narrowband MIMO model (with unit noise power) is given by

   R_y = E[y y^H] = H R_x H^H + I_{M_r},   (10.6)


where R_x = E[x x^H] is the covariance of the MIMO channel input.

•  The mutual information can be shown to be

   I(X; Y) = B log_2 det(I_{M_r} + H R_x H^H).   (10.7)

•  The MIMO capacity is achieved by maximizing the mutual information over all input covariance matrices R_x satisfying the power constraint:

   C = max_{R_x : Tr(R_x) = ρ} B log_2 det(I_{M_r} + H R_x H^H),   (10.8)

where det(A) denotes the determinant of the matrix A.

•  Now let us consider the case of Channel Known at Transmitter.


•  Substituting the matrix SVD of H into C and using properties of unitary matrices, we get the MIMO capacity with CSIT and CSIR as

   C = max_{ρ_i : Σ_i ρ_i ≤ ρ} Σ_i B log_2(1 + σ_i^2 ρ_i).   (10.9)

•  Since ρ = P/σ_n^2, the capacity (10.9) can also be expressed in terms of the power P_i allocated to the ith parallel channel as

   C = max_{P_i : Σ_i P_i ≤ P} Σ_i B log_2(1 + γ_i P_i / P),   (10.10)

where ρ_i = P_i/σ_n^2 and γ_i = σ_i^2 P/σ_n^2 is the SNR associated with the ith channel at full power.

•  Solving the optimization leads to a water-filling power allocation for the MIMO channel:

   P_i / P = 1/γ_0 − 1/γ_i   for γ_i ≥ γ_0,   and   P_i / P = 0   for γ_i < γ_0,   (10.11)

for some cutoff value γ_0.


•  The resulting capacity is then

   C = Σ_{i : γ_i ≥ γ_0} B log_2(γ_i / γ_0).   (10.12)
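A small water-filling routine, added as an illustration of (10.11)-(10.12); the iterative cutoff search is one standard way to find γ_0, and the example channel and the 10 dB operating point are assumptions of mine.

```python
import numpy as np

def waterfill(gammas, B=1.0):
    """Water-filling over parallel channels with full-power SNRs gamma_i (eqs. 10.11-10.12).
    Returns the power fractions P_i/P (for the SNRs sorted in decreasing order)
    and the resulting capacity for bandwidth B."""
    g = np.sort(np.asarray(gammas, dtype=float))[::-1]
    active = len(g)
    while active > 0:
        # Cutoff 1/gamma_0 chosen so that the active power fractions sum to one
        inv_gamma0 = (1.0 + np.sum(1.0 / g[:active])) / active
        if 1.0 / g[active - 1] < inv_gamma0:      # weakest active channel stays above cutoff
            break
        active -= 1                               # otherwise drop it and recompute
    frac = np.maximum(inv_gamma0 - 1.0 / g, 0.0)  # P_i/P per (10.11)
    capacity = B * np.sum(np.log2(g[:active] * inv_gamma0))   # (10.12), since g*inv_gamma0 = gamma_i/gamma_0
    return frac, capacity

# Example: full-power SNRs gamma_i = sigma_i^2 * P / sigma_n^2 for a sample 4x4 channel at 10 dB
rng = np.random.default_rng(3)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
gammas = np.linalg.svd(H, compute_uv=False) ** 2 * 10.0
frac, C = waterfill(gammas)
print(frac, C)
```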

•  Now consider the case of Channel Unknown at Transmitter (Uniform Power Allocation).

•  Suppose now that the receiver knows the channel but the transmitter does not. Without channel information, the transmitter cannot optimize its power allocation or input covariance structure across antennas.

•  If the distribution of H follows the ZMSW (zero-mean spatially white) channel gain model, there is no bias in terms of the mean or covariance of H.

•  Thus, it seems intuitive that the best strategy should be to allocate equal power to each transmit antenna, resulting in an input covariance matrix equal to the scaled identity matrix: R_x = (ρ/M_t) I_{M_t}.


•  It is shown that under these assumptions this input covariance matrix indeed maximizes the mutual information of the channel.

•  For an M_t-transmit, M_r-receive antenna system, this yields mutual information given by C = B log_2 det(I_{M_r} + (ρ/M_t) H H^H).

•  Using the SVD of H, we can express this as

   C = Σ_{i=1}^{R_H} B log_2(1 + σ_i^2 ρ / M_t),   (10.13)

where the σ_i are the singular values of H and R_H is the number of nonzero singular values of H.

•  The mutual information of the MIMO channel (10.13) depends on the specific realization of the matrix H, in particular its singular values {σ_i}.
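As a sanity check (my addition; the parameter values are arbitrary), the determinant form and the singular-value form (10.13) of the equal-power mutual information coincide for any channel realization:

```python
import numpy as np

rng = np.random.default_rng(4)
Mt, Mr, rho, B = 4, 4, 10.0, 1.0
H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

# Determinant form: C = B log2 det(I_Mr + (rho/Mt) H H^H)
C_det = B * np.log2(np.linalg.det(np.eye(Mr) + (rho / Mt) * H @ H.conj().T).real)

# Singular-value form (10.13): C = sum_i B log2(1 + sigma_i^2 rho / Mt)
sigma = np.linalg.svd(H, compute_uv=False)
C_svd = B * np.sum(np.log2(1.0 + sigma**2 * rho / Mt))

print(C_det, C_svd)     # the two expressions coincide
```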


•  In capacity with outage the transmitter fixes a transmission rate C, and the outage probability associated with C is the probability that the transmitted data will not be received correctly or, equivalently, the probability that the channel H has mutual information less than C.

•  This probability is given by

   P_out = p(H : B log_2 det(I_{M_r} + (ρ/M_t) H H^H) < C).   (10.14)
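A Monte-Carlo estimate of (10.14), added for illustration; the 4 × 4 configuration, the 10 dB SNR, the target rate and the number of trials are arbitrary assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(5)
Mt = Mr = 4
rho, B = 10.0, 1.0          # SNR and (normalized) bandwidth
C_target = 8.0              # fixed transmission rate in bps for B = 1 Hz
trials = 20_000

outages = 0
for _ in range(trials):
    H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
    mi = B * np.log2(np.linalg.det(np.eye(Mr) + (rho / Mt) * H @ H.conj().T).real)
    outages += int(mi < C_target)     # outage event: mutual information below the chosen rate

print("P_out approx", outages / trials)
```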

•  Note that for fixed M_r, under the ZMSW model the law of large numbers implies that

   lim_{M_t → ∞} (1/M_t) H H^H = I_{M_r}.   (10.15)

•  Substituting this into (10.13) yields that the mutual information in the asymptotic limit of large M_t becomes a constant equal to C = B M_r log_2(1 + ρ).


•  We can make two important observations from the results in (10.14) and (10.15):

•  As SNR grows large, capacity also grows linearly with M = min{M_t, M_r} for any M_t and M_r.

•  At very low SNRs transmit antennas are not beneficial: capacity only scales with the number of receive antennas, independent of the number of transmit antennas.

•  Fading Channels

•  Channel Known at Transmitter: Water-Filling. With both CSIT and CSIR, water-filling can be applied to each channel realization; under a short-term (per-realization) power constraint the resulting capacity is the average of the static water-filling capacity (10.12) over the channel distribution:

   C = E_H [ Σ_{i : γ_i(H) ≥ γ_0} B log_2(γ_i(H) / γ_0) ].   (10.16)


•  A less restrictive constraint is a long-term power constraint, where we can use different powers for different channel realizations subject to the average power constraint over all channel realizations.

•  The ergodic capacity in this case is

   C = max_{P_i(H) : E_H[Σ_i P_i(H)] ≤ P} E_H [ Σ_i B log_2(1 + γ_i(H) P_i(H) / P) ].   (10.17)

•  Channel Unknown at Transmitter: Ergodic Capacity and Capacity with Outage

•  Consider now a time-varying channel with random matrix H known at the receiver but not the transmitter. The transmitter assumes a ZMSW distribution for H.

•  The two relevant capacity definitions in this case are ergodic capacity and capacity with outage.

•  Ergodic capacity defines the maximum rate, averaged over all channel realizations, that can be transmitted over the channel for a transmission strategy based only on the distribution of H.


•  This leads to the transmitter optimization problem, i.e., finding the optimum input covariance matrix R_x to maximize ergodic capacity subject to the transmit power constraint.

•  Mathematically, the problem is to characterize the optimum R_x to maximize

   C = max_{R_x : Tr(R_x) = ρ} E_H [ B log_2 det(I_{M_r} + H R_x H^H) ],   (10.18)

where the expectation is with respect to the distribution on the channel matrix H, which for the ZMSW model has i.i.d. zero-mean, circularly symmetric, unit-variance entries.

•  As in the case of scalar channels, the optimum input covariance matrix that maximizes ergodic capacity for the ZMSW model is the scaled identity matrix R_x = (ρ/M_t) I_{M_t}. Thus the ergodic capacity is given by

   C = E_H [ B log_2 det(I_{M_r} + (ρ/M_t) H H^H) ].   (10.19)
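A short Monte-Carlo estimate of (10.19), added here as an illustration of the kind of curve shown in Figure 10.4 (4 × 4, i.i.d. complex Gaussian gains); the SNR grid and trial count are my own choices.

```python
import numpy as np

rng = np.random.default_rng(6)
Mt = Mr = 4
trials = 5_000

for snr_db in (0, 5, 10, 15, 20):
    rho = 10 ** (snr_db / 10)
    acc = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
        # Equal-power mutual information for this realization, B = 1 Hz
        acc += np.log2(np.linalg.det(np.eye(Mr) + (rho / Mt) * H @ H.conj().T).real)
    print(f"{snr_db:2d} dB: ergodic capacity approx {acc / trials:.2f} bps/Hz")
```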


•  The ergodic capacity of a 4 × 4 MIMO system with i.i.d. complex Gaussian channel gains is shown in Figure 10.4.


•  Capacity with outage is defined similarly to the definition for static channels described previously, although now capacity with outage applies to a slowly varying channel where the channel matrix H is constant over a relatively long transmission time and then changes to a new value.

•  As in the static channel case, the channel realization and corresponding channel capacity are not known at the transmitter, yet the transmitter must still fix a transmission rate to send data over the channel.

•  For any choice of this rate C, there will be an outage probability associated with C, which defines the probability that the transmitted data will not be received correctly.

•  The outage capacity can sometimes be improved by not allocating power to one or more of the transmit antennas, especially when the outage probability is high. This is because outage capacity depends on the tail of the probability distribution.

•  With fewer antennas, less averaging takes place and the spread of the tail increases.


•  The capacity with outage of a 4 × 4 MIMO system with i.i.d. complex Gaussian channel gains is shown in Figure 10.5.


MIMO Diversity Gain: Beamforming

•  The multiple antennas at the transmitter and receiver can be used to obtain diversity gain instead of capacity gain.

•  In this setting, the same symbol, weighted by a complex scale factor, is sent over each transmit antenna, so that the input covariance matrix has unit rank.

•  This scheme is also referred to as MIMO beamforming.

•  A beamforming strategy corresponds to the precoding and receiver shaping matrices described previously being just column vectors: V = v and U = u, as shown in Figure 10.7.

•  In the figure, the transmit symbol x is sent over the ith antenna with weight v_i. On the receive side, the signal received on the ith antenna is weighted by u_i.

•  Both transmit and receive weight vectors are normalized so that ‖u‖ = ‖v‖ = 1.

•  The resulting received signal is given by

   y = u^H H v x + u^H n,   (10.20)

where the combined noise u^H n has the same distribution as a single noise term n_i if n = (n_1, ..., n_{M_r}) has i.i.d. elements.


•  Beamforming provides diversity gain by coherent combining of the multiple signal paths. Channel knowledge at the receiver is typically assumed since this is required for coherent combining.


•  The diversity gain then depends on whether or not the channel is known at the transmitter.

•  When the channel matrix H is known, the received SNR is optimized by choosing u and v as the principal left and right singular vectors of the channel matrix H.

•  The corresponding received SNR can be shown to equal γ = ρ λ_max, where λ_max is the largest eigenvalue of the Wishart matrix W = H H^H.

•  The resulting capacity is C = B log_2(1 + λ_max ρ), corresponding to the capacity of a SISO channel with channel power gain λ_max.

•  When the channel is not known at the transmitter, the transmit antenna weights are all equal, so the received SNR equals γ = ρ |u^H H v|^2 with v the equal-weight vector, where u is chosen to maximize γ; this yields u proportional to Hv (maximal-ratio combining at the receiver) and γ = ρ ‖H v‖^2.

•  Clearly the lack of transmitter CSI will result in a lower SNR and capacity than with optimal transmit weighting.
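An illustrative comparison I added (the 4 × 2 configuration and ρ = 10 are arbitrary): with full CSI the weights are the principal singular vectors and the received SNR is ρλ_max; without transmit CSI, equal transmit weights plus maximal-ratio combining at the receiver give a lower SNR.

```python
import numpy as np

rng = np.random.default_rng(7)
Mt, Mr, rho = 4, 2, 10.0
H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

# CSIT + CSIR: u, v are the principal left/right singular vectors of H
U, s, Vh = np.linalg.svd(H)
lam_max = s[0] ** 2                                   # largest eigenvalue of W = H H^H
snr_csit = rho * lam_max
C_csit = np.log2(1 + snr_csit)                        # capacity for B = 1

# No CSIT: equal transmit weights, receiver combines with u = Hv / ||Hv|| (MRC)
v_eq = np.ones(Mt) / np.sqrt(Mt)
snr_no_csit = rho * np.linalg.norm(H @ v_eq) ** 2

print(snr_csit, snr_no_csit, C_csit)
```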


Diversity-Multiplexing Tradeoffs

•  So far we have seen that there are two mechanisms for utilizing multiple antennas to improve wireless system performance.

•  One option is to obtain capacity gain by decomposing the MIMO channel into parallel channels and multiplexing different data streams onto these channels. This capacity gain is also referred to as a multiplexing gain.

•  It is not necessary to use the antennas purely for multiplexing or diversity.

•  Some of the space-time dimensions can be used for diversity gain, and the remaining dimensions used for multiplexing gain.

•  This gives rise to a fundamental design question in MIMO systems: should the antennas be used for diversity gain, multiplexing gain, or both?

•  The diversity/multiplexing tradeoff or, more generally, the tradeoff between data rate, probability of error, and complexity for MIMO systems has been extensively studied in the literature, from both a theoretical perspective and in terms of practical space-time code designs.

•  This work has primarily focused on block fading channels with receiver CSI only, since when both transmitter and receiver know the channel the tradeoff is relatively straightforward.


•  Antenna subsets can first be grouped for diversity gain; the multiplexing gain then corresponds to the new channel with reduced dimension due to the grouping.

•  For finite blocklengths it is not possible to achieve full diversity and full multiplexing gain simultaneously, in which case there is a tradeoff between these gains.

•  A transmission scheme is said to achieve multiplexing gain r and diversity gain d if the data rate per unit hertz R(SNR) (bps/Hz) and the probability of error P_e(SNR), as functions of SNR, satisfy

   lim_{SNR → ∞} R(SNR) / log_2(SNR) = r,   (10.21)

   lim_{SNR → ∞} log P_e(SNR) / log(SNR) = −d.   (10.22)


•  For each r, the optimal diversity gain d_opt(r) is the maximum diversity gain that can be achieved by any scheme.

•  It is shown that if the fading block length exceeds the total number of antennas at the transmitter and receiver, then

   d_opt(r) = (M_t − r)(M_r − r),   0 ≤ r ≤ min(M_t, M_r),   (10.23)

for integer r, with the curve between integer points given by linear interpolation.

•  The function (10.23) is plotted in Fig. 10.8.
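The following sketch (my addition) evaluates (10.23) at integer multiplexing gains and interpolates linearly in between, reproducing the shape of the curve in Fig. 10.8 for a 4 × 4 system.

```python
import numpy as np

def dmt_curve(Mt, Mr, num=101):
    """Optimal diversity gain d_opt(r) from (10.23): equal to (Mt-k)(Mr-k) at integer
    multiplexing gains r = k and piecewise linear in between (the curve of Fig. 10.8)."""
    k = np.arange(0, min(Mt, Mr) + 1)
    d_k = (Mt - k) * (Mr - k)
    r = np.linspace(0.0, float(min(Mt, Mr)), num)
    return r, np.interp(r, k, d_k)

r, d = dmt_curve(4, 4)          # r, d can be passed to matplotlib to plot the tradeoff curve
for k in range(5):
    print(f"r = {k}: d_opt = {(4 - k) * (4 - k)}")   # 16, 9, 4, 1, 0 for a 4x4 system
```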


Space-Time Modulation and Coding

•  Since a MIMO channel has input-output relationship y = Hx + n, the symbol transmitted over the channel each symbol time is a vector rather than a scalar, as in traditional modulation for the SISO channel.

•  Moreover, when the signal design extends over both space (via the multiple antennas) and time (via multiple symbol times), it is typically referred to as a space-time code.

•  Most space-time codes are designed for quasi-static channels where the channel is constant over a block of T symbol times, and the channel is assumed unknown at the transmitter.

•  Let X = [x_1, ..., x_T] denote the M_t × T channel input matrix with ith column x_i equal to the vector channel input over the ith transmission time.

•  Let Y = [y_1, ..., y_T] denote the M_r × T channel output matrix with ith column y_i equal to the vector channel output over the ith transmission time.

•  Let N = [n_1, ..., n_T] denote the M_r × T noise matrix with ith column n_i equal to the receiver noise vector on the ith transmission time.


ML Detection and Pairwise Error Probability

•  With this matrix representation the input-output relationship over the block of T symbol times becomes

   Y = H X + N.   (10.24)

•  Assume a space-time code where the receiver has knowledge of the channel matrix H. Under ML detection and given received matrix Y, the ML estimate X̂ of the transmitted matrix satisfies

   X̂ = arg min_X ‖Y − H X‖_F^2,   (10.25)

where ‖A‖_F denotes the Frobenius norm of the matrix A and the minimization is taken over all possible space-time input matrices X in the codebook.

•  The pairwise error probability for mistaking a transmit matrix X for another matrix X̂, denoted as p(X → X̂), depends only on the distance between the two matrices after transmission through the channel and on the noise power, i.e.,
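A brute-force illustration of (10.25) that I added: the receiver searches a small, purely hypothetical codebook of space-time matrices for the codeword minimizing the Frobenius-norm metric; the sizes and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
Mt, Mr, T = 2, 2, 2
sigma_n = 0.3

# A small illustrative codebook of Mt x T space-time codewords (random, just for the demo)
codebook = [(rng.standard_normal((Mt, T)) + 1j * rng.standard_normal((Mt, T))) / np.sqrt(2)
            for _ in range(16)]

H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
X_true = codebook[5]
N = sigma_n * (rng.standard_normal((Mr, T)) + 1j * rng.standard_normal((Mr, T))) / np.sqrt(2)
Y = H @ X_true + N                                   # block model (10.24)

# ML detection (10.25): minimize the Frobenius norm over all codewords
metrics = [np.linalg.norm(Y - H @ X, 'fro') for X in codebook]
best = int(np.argmin(metrics))
X_hat = codebook[best]
print("detected codeword index:", best)              # 5 with high probability at this noise level
```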


   p(X → X̂) = Q( sqrt( ‖H (X − X̂)‖_F^2 / (2 σ_n^2) ) ).   (10.26)

•  Let D_x = X − X̂ denote the difference matrix between X and X̂. Applying the Chernoff bound to (10.26) yields

   p(X → X̂) ≤ exp( −‖H D_x‖_F^2 / (4 σ_n^2) ).   (10.27)

•  Let h_i denote the ith row of H, i = 1, ..., M_r. Then

   ‖H D_x‖_F^2 = Σ_{i=1}^{M_r} h_i D_x D_x^H h_i^H.   (10.28)

•  Let ℋ = vec(H^t)^t, where vec(A) is defined as the vector that results from stacking the columns of matrix A on top of each other to form a vector.


•  So ℋ is a row vector of length M_t M_r. Also define D_X = I_{M_r} ⊗ D_x, where ⊗ denotes the Kronecker product. With these definitions,

   ‖H D_x‖_F^2 = ‖ℋ D_X‖^2 = ℋ D_X D_X^H ℋ^H.   (10.29)

•  Substituting (10.29) into (10.27) and taking the expectation relative to all possible channel realizations yields

   p(X → X̂) ≤ E_ℋ [ exp( −ℋ D_X D_X^H ℋ^H / (4 σ_n^2) ) ].   (10.30)

•  Suppose that the channel matrix H is random and spatially white, so that its entries are i.i.d. zero-mean unit-variance complex Gaussian random variables. Then taking the expectation yields

   p(X → X̂) ≤ ( det[ I_{M_t M_r} + (1/(4 σ_n^2)) D_X D_X^H ] )^{−1} = ( det[ I_{M_t} + (P/(4 σ_n^2)) Γ ] )^{−M_r},   (10.31)

where Γ = (1/P) D_x D_x^H and the second equality follows from the block-diagonal (Kronecker) structure of D_X D_X^H.


•  Equivalently, in terms of the nonzero eigenvalues λ_k(Γ) of Γ,

   p(X → X̂) ≤ Π_{k=1}^{N_Δ} ( 1 / (1 + γ̄ λ_k(Γ)/4) )^{M_r},   (10.32)

where γ̄ = P/σ_n^2 and N_Δ is the rank of Γ. At high SNR (γ̄ λ_k(Γ)/4 ≫ 1) this bound is approximately

   p(X → X̂) ≲ ( Π_{k=1}^{N_Δ} λ_k(Γ) )^{−M_r} ( γ̄/4 )^{−N_Δ M_r}.   (10.33)

•  Rank and Determinant Criterion: The pairwise error probability in (10.33) indicates that the probability of error decreases as γ̄^{−d} for d = N_Δ M_r.

•  Thus, N_Δ M_r is the diversity gain of the space-time code. The maximum diversity gain possible through coherent combining of M_t transmit and M_r receive antennas is M_t M_r.


•  To obtain this maximum diversity gain, the space-time code must be designed such that the M_t × M_t matrix Γ corresponding to the difference D_x between any two codewords has full rank equal to M_t. This design criterion is referred to as the rank criterion.

•  The coding gain associated with the pairwise error probability in (10.33) depends on the first term, ( Π_{k=1}^{N_Δ} λ_k(Γ) )^{−M_r}.

•  A high coding gain is achieved by maximizing the minimum of the determinant of Γ over all input matrix pairs X and X̂. This criterion is referred to as the determinant criterion.
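As a concrete check of these criteria (my own addition, using the well-known Alamouti code with BPSK symbols as the example, which is not discussed in this lecture), the sketch below computes the minimum rank and minimum determinant over all codeword pairs; every pairwise difference turns out to be full rank, so this code satisfies the rank criterion.

```python
import numpy as np
from itertools import product

def alamouti(s1, s2):
    """2x2 Alamouti space-time codeword for the symbol pair (s1, s2)."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

# BPSK symbol pairs -> 4 codewords
codewords = [alamouti(s1, s2) for s1, s2 in product([-1.0, 1.0], repeat=2)]

min_rank, min_det = np.inf, np.inf
for i, Xi in enumerate(codewords):
    for Xj in codewords[i + 1:]:
        D = Xi - Xj                                  # difference matrix D_x
        G = D @ D.conj().T                           # Mt x Mt matrix whose eigenvalues set the PEP
        min_rank = min(min_rank, np.linalg.matrix_rank(D))
        min_det = min(min_det, np.linalg.det(G).real)

print("minimum rank over codeword pairs:", int(min_rank))   # 2 -> full diversity Mt*Mr
print("minimum determinant (coding gain):", min_det)
```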


Spatial Multiplexing and BLAST Architectures

•  In order to get full diversity order, an encoded bit stream must be transmitted over all M_t transmit antennas. This can be done through serial encoding, illustrated in Figure 10.9.


•  A simpler method to achieve spatial multiplexing, pioneered at Bell Laboratories as one of the Bell Labs Layered Space Time (BLAST) architectures for MIMO channels, is parallel encoding, illustrated in Figure 10.10.

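A simplified zero-forcing detector with successive interference cancellation, written in the spirit of a BLAST-type receiver (my own sketch; the detection-ordering rule, the QPSK alphabet and the noise level are simplifying assumptions, not the lecture's exact algorithm).

```python
import numpy as np

rng = np.random.default_rng(9)
Mt, Mr = 4, 4
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
x = rng.choice(qpsk, size=Mt)                       # one independently encoded symbol per antenna
n = 0.05 * (rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr)) / np.sqrt(2)
y = H @ x + n

# Zero-forcing with successive interference cancellation (nulling + cancelling)
x_hat = np.zeros(Mt, dtype=complex)
remaining = list(range(Mt))
y_res = y.copy()
while remaining:
    Hr = H[:, remaining]
    W = np.linalg.pinv(Hr)                          # ZF nulling matrix for the undetected streams
    z = W @ y_res
    # Detect the stream with the largest post-nulling SNR (smallest row norm of W)
    k = int(np.argmin(np.linalg.norm(W, axis=1)))
    sym = qpsk[np.argmin(np.abs(qpsk - z[k]))]      # nearest-neighbour slicing
    idx = remaining.pop(k)
    x_hat[idx] = sym
    y_res = y_res - H[:, idx] * sym                 # cancel the detected stream

print(np.allclose(x_hat, x))                        # True when all streams are detected correctly
```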
