
Page 1: Discrete Cosine Transform

Nuno Vasconcelos
UCSD - SVCL - Statistical Visual Computing Lab

Page 2: Discrete Fourier Transform
• in the last classes, we studied the DFT
• due to its computational efficiency, the DFT is very popular
• however, it has strong disadvantages for some applications
  – it is complex
  – it has poor energy compaction
• energy compaction
  – is the ability to pack the energy of the spatial sequence into as few frequency coefficients as possible
  – this is very important for image compression: we represent the signal in the frequency domain
  – if compaction is high, we only have to transmit a few coefficients instead of the whole set of pixels

Page 3: Discrete Cosine Transform
• a much better transform, from this point of view, is the DCT
  – in this example we see the amplitude spectra of the image above, under the DFT and the DCT
  – note the much more concentrated histogram obtained with the DCT
• why is energy compaction important?
  – the main reason is image compression
  – it also turns out to be beneficial in other applications

Page 4: Image compression
• an image compression system has three main blocks
  – a transform (usually DCT on 8x8 blocks)
  – a quantizer
  – a lossless (entropy) coder
• each tries to throw away information which is not essential to understand the image, but costs bits

Page 5: Image compression
• the transform throws away correlations
  – if you make a plot of the value of a pixel as a function of one of its neighbors
  – you will see that the pixels are highly correlated (i.e. most of the time they are very similar)
  – this is just a consequence of the fact that surfaces are smooth

Page 6: Image compression
• the transform eliminates these correlations
  – this is best seen by considering the 2-pt transform
  – note that the first coefficient is always the DC-value

    $X[0] = x[0] + x[1]$

  – an orthogonal transform can be written in matrix form as

    $X = T^T x, \qquad T^T T = I$

    i.e. T has orthogonal columns
  – this means that

    $X[1] = x[0] - x[1]$

  – note that if x[0] is similar to x[1], then

    $\begin{cases} X[0] = x[0] + x[1] \approx 2x[0] \\ X[1] = x[0] - x[1] \approx 0 \end{cases}$

Page 7: Image compression
• the transform eliminates these correlations
  – note that if x[0] is similar to x[1], then

    $\begin{cases} X[0] = x[0] + x[1] \approx 2x[0] \\ X[1] = x[0] - x[1] \approx 0 \end{cases}$

  – in the transform domain we only have to transmit one number, without any significant cost in image quality
  – by "decorrelating" the signal we reduced the bit rate to 1/2!
  – note that an orthogonal matrix ($T^T T = I$) applies a rotation to the pixel space
  – this aligns the data with the canonical axes
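To make the decorrelation concrete, here is a minimal numpy sketch (not from the slides; the random-walk signal and variable names are illustrative assumptions) that applies the normalized 2-pt transform to pairs of neighboring samples of a smooth signal:

```python
import numpy as np

# decorrelation by the 2-pt orthogonal transform (synthetic "smooth" signal)
rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(size=1000))      # random walk: neighbors are very similar
x = np.stack([smooth[:-1], smooth[1:]])        # pairs of neighboring samples

T = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)     # orthogonal: T.T @ T = I
X = T.T @ x                                    # X[0] ~ sum, X[1] ~ difference

print(np.corrcoef(x)[0, 1])   # close to 1: neighbors are highly correlated
print(np.corrcoef(X)[0, 1])   # much smaller: coefficients are decorrelated
print(X.var(axis=1))          # almost all of the energy ends up in X[0]
```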


Page 8: Image compression
• a second advantage of working in the frequency domain
  – is that our visual system is less sensitive to distortion around edges
  – the transition associated with the edge masks our ability to perceive the noise
  – e.g. if you blow up a compressed picture, it is likely to look like this
  – in general, the compression errors are more annoying in the smooth image regions

Page 9: Image compression
• three JPEG examples: 36KB, 5.7KB, 1.7KB
  – note that the blockiness is more visible in the torso

Page 10: Image compression
• important point: by itself, the transform
  – does not save any bits
  – does not introduce any distortion
• both of these happen when we throw away information
• this is called "lossy compression" and is implemented by the quantizer
• what is a quantizer?
  – think of the round() function, that rounds to the nearest integer
  – round(1) = 1; round(0.55543) = 1; round(0.0000005) = 0
  – instead of an infinite range between 0 and 1 (an infinite number of bits to transmit), the output is zero or one (1 bit)
  – we threw away all the stuff in between, but saved a lot of bits
  – a quantizer does this less drastically

Page 11: Quantizer
• it is a function of this type
  – inputs in a given range are mapped to the same output
• to implement this, we
  – 1) define a quantizer step size Q
  – 2) apply a rounding function

    $x_q = \mathrm{round}\!\left(\frac{x}{Q}\right)$

  – the larger the Q, the fewer reconstruction levels we have
  – more compression at the cost of larger distortion
  – e.g. for x in [0,255], we need 8 bits and have 256 color values
  – with Q = 64, we only have 4 levels and only need 2 bits
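As an illustration (my own sketch, not part of the slides), a minimal numpy version of this uniform quantizer; Q = 64 reproduces the coarse example above:

```python
import numpy as np

def quantize(x, Q):
    """Uniform quantizer: map x to an integer index with step size Q."""
    return np.round(x / Q).astype(int)

def dequantize(xq, Q):
    """Reconstruction: map the index back to a representative value."""
    return xq * Q

x = np.arange(256)                      # 8-bit pixel values, 0..255
xq = quantize(x, Q=64)
print(np.unique(xq))                    # few indices -> few bits to transmit
                                        # (plain rounding gives indices 0..4; flooring x/Q
                                        #  gives exactly the 4 levels quoted on the slide)
print(np.abs(x - dequantize(xq, 64)).max())   # worst-case distortion is Q/2
```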


Page 12: Quantizer
• note that we can quantize some frequency coefficients more heavily than others by simply increasing Q
• this leads to the idea of a quantization matrix
• we start with an image block (e.g. 8x8 pixels)

Page 13: Quantizer
• next we apply a transform (e.g. 8x8 DCT)

Page 14: Quantizer
• and quantize with a varying Q
  – (figure: DCT coefficients quantized with a Q matrix)

Page 15: Quantizer
• note that higher frequencies are quantized more heavily
  – the entries of the Q matrix increase with frequency
  – as a result, many high frequency coefficients are simply wiped out (compare the DCT with the quantized DCT)

Page 16: Quantizer
• this saves a lot of bits, but we no longer have an exact replica of the original image block
  – (figure: DCT, quantized DCT, inverse DCT, and original pixels)

Page 17: Quantizer
• note, however, that visually the blocks (original vs. decompressed) are not very different
  – we have saved lots of bits without much "perceptual" loss
  – this is the reason why JPEG and MPEG work
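The pipeline of pages 12-17 can be sketched in a few lines of numpy/scipy (my own illustration; the synthetic block and the simple Q matrix below are assumptions, not the standardized JPEG tables):

```python
import numpy as np
from scipy.fft import dctn, idctn

# transform + quantization of one 8x8 block (toy data and toy Q matrix)
rng = np.random.default_rng(1)
block = np.round(64 + 32 * np.cumsum(rng.normal(size=(8, 8)), axis=1))  # smooth-ish pixels

C = dctn(block, type=2, norm='ortho')                 # 8x8 DCT

u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
Qmtx = 8.0 * (1 + u + v)                              # step grows with frequency (page 15)

Cq = np.round(C / Qmtx)                               # quantized DCT: many zeros
rec = idctn(Cq * Qmtx, type=2, norm='ortho')          # decompressed block

print(int((Cq == 0).sum()), "of 64 coefficients were wiped out")
print(round(float(np.abs(block - rec).mean()), 2), "mean absolute pixel error")
```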


Page 18: Image compression
• three JPEG examples: 36KB, 5.7KB, 1.7KB
  – note that the two images on the left look identical
  – JPEG requires 6x fewer bits

Page 19: Discrete Cosine Transform
• note that
  – the better the energy compaction
  – the larger the number of coefficients that get wiped out
  – the greater the bit savings for the same loss
• this is why the DCT is important
• we will do mostly the 1D-DCT
  – the formulas are simpler, the insights the same
  – as always, extension to 2D is trivial

Page 20: Discrete Cosine Transform
• the first thing to note is that there are various versions of the DCT
  – these are usually known as DCT-I to DCT-IV
  – they vary in minor details
  – the most popular is the DCT-II, also known as the even symmetric DCT, or as "the DCT"

$$C_x[k] = \begin{cases} 2\displaystyle\sum_{n=0}^{N-1} x[n]\cos\left(\frac{\pi k (2n+1)}{2N}\right), & 0 \le k < N \\ 0, & \text{otherwise} \end{cases}$$

$$x[n] = \begin{cases} \dfrac{1}{N}\displaystyle\sum_{k=0}^{N-1} w[k]\, C_x[k]\cos\left(\frac{\pi k (2n+1)}{2N}\right), & 0 \le n < N \\ 0, & \text{otherwise} \end{cases}$$

with

$$w[k] = \begin{cases} \tfrac{1}{2}, & k = 0 \\ 1, & 1 \le k < N \end{cases}$$
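A small numerical check of these formulas (my own sketch, not from the slides): the direct sums below reproduce scipy's unnormalized type-2 DCT, and the weighted sum inverts it.

```python
import numpy as np
from scipy.fft import dct

def dct2_direct(x):
    """C_x[k] = 2 * sum_n x[n] cos(pi*k*(2n+1)/(2N)), 0 <= k < N."""
    N = len(x)
    n = np.arange(N)
    return np.array([2 * np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

def idct2_direct(C):
    """x[n] = (1/N) * sum_k w[k] C[k] cos(pi*k*(2n+1)/(2N)), with w[0] = 1/2."""
    N = len(C)
    k = np.arange(N)
    w = np.where(k == 0, 0.5, 1.0)
    return np.array([np.sum(w * C * np.cos(np.pi * k * (2 * n + 1) / (2 * N))) / N
                     for n in range(N)])

x = np.random.default_rng(2).normal(size=8)
C = dct2_direct(x)
print(np.allclose(C, dct(x, type=2)))     # matches scipy's unnormalized DCT-II
print(np.allclose(idct2_direct(C), x))    # the weighted sum recovers x[n]
```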

Page 21: Discrete Cosine Transform

$$C_x[k] = \begin{cases} 2\displaystyle\sum_{n=0}^{N-1} x[n]\cos\left(\frac{\pi k (2n+1)}{2N}\right), & 0 \le k < N \\ 0, & \text{otherwise} \end{cases}$$

• from this equation we can immediately see that the DCT coefficients are real
• to understand the better energy compaction
  – it is interesting to compare the DCT to the DFT
  – it turns out that there is a simple relationship
• we consider a sequence x[n] which is zero outside of {0, ..., N-1}
• to relate the DCT to the DFT we need three steps

Page 22: Discrete Cosine Transform
• step 1): create a sequence

$$y[n] = x[n] + x[2N-1-n] = \begin{cases} x[n], & 0 \le n < N \\ x[2N-1-n], & N \le n < 2N \end{cases}$$

• step 2): compute the 2N-point DFT of y[n]

$$Y[k] = \sum_{n=0}^{2N-1} y[n]\, e^{-j\frac{2\pi}{2N}kn}, \qquad 0 \le k < 2N$$

• step 3): rewrite as a function of N terms only

$$Y[k] = \sum_{n=0}^{N-1} y[n]\, e^{-j\frac{2\pi}{2N}kn} + \sum_{n=N}^{2N-1} y[n]\, e^{-j\frac{2\pi}{2N}kn}$$

Page 23: Discrete Cosine Transform
• step 3): rewrite as a function of N terms only

$$Y[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j\frac{2\pi}{2N}kn} + \sum_{n=N}^{2N-1} x[2N-1-n]\, e^{-j\frac{2\pi}{2N}kn}$$

with the change of variable $m = 2N-1-n$ ($n = N \Rightarrow m = N-1$, $n = 2N-1 \Rightarrow m = 0$):

$$Y[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j\frac{2\pi}{2N}kn} + \sum_{m=0}^{N-1} x[m]\, e^{-j\frac{2\pi}{2N}k(2N-1-m)}$$

$$= \sum_{n=0}^{N-1} x[n] \left\{ e^{-j\frac{2\pi}{2N}kn} + e^{j\frac{2\pi}{2N}kn}\, e^{j\frac{2\pi}{2N}k}\, \underbrace{e^{-j\frac{2\pi}{2N}2Nk}}_{=1} \right\}$$

$$= \sum_{n=0}^{N-1} x[n] \left\{ e^{-j\frac{2\pi}{2N}kn} + e^{j\frac{2\pi}{2N}kn}\, e^{j\frac{2\pi}{2N}k} \right\}$$

  – to write this as a cosine we need to make it into two "mirror" exponents

Page 24: Discrete Cosine Transform
• step 3): rewrite as a function of N terms only

$$Y[k] = \sum_{n=0}^{N-1} x[n] \left\{ e^{-j\frac{2\pi}{2N}kn} + e^{j\frac{2\pi}{2N}kn}\, e^{j\frac{2\pi}{2N}k} \right\}$$

$$= e^{j\frac{\pi}{2N}k} \sum_{n=0}^{N-1} x[n] \left\{ e^{-j\frac{\pi}{2N}k(2n+1)} + e^{j\frac{\pi}{2N}k(2n+1)} \right\}$$

$$= e^{j\frac{\pi}{2N}k} \sum_{n=0}^{N-1} 2\, x[n] \cos\left(\frac{\pi k (2n+1)}{2N}\right)$$

  – from which

$$Y[k] = e^{j\frac{\pi}{2N}k}\, C_x[k], \qquad 0 \le k < 2N$$

Page 25: Discrete Cosine Transform
• it follows that

$$C_x[k] = \begin{cases} e^{-j\frac{\pi}{2N}k}\, Y[k], & 0 \le k < N \\ 0, & \text{otherwise} \end{cases}$$

• in summary, we have three steps

$$\underbrace{x[n]}_{N\text{-pt}} \;\leftrightarrow\; \underbrace{y[n]}_{2N\text{-pt}} \;\overset{\text{DFT}}{\leftrightarrow}\; \underbrace{Y[k]}_{2N\text{-pt}} \;\leftrightarrow\; \underbrace{C_x[k]}_{N\text{-pt}}$$

• this interpretation is useful in various ways
  – it provides insight on why the DCT has better energy compaction
  – it provides a fast algorithm for the computation of the DCT

Page 26: Energy compaction

$$\underbrace{x[n]}_{N\text{-pt}} \;\leftrightarrow\; \underbrace{y[n]}_{2N\text{-pt}} \;\overset{\text{DFT}}{\leftrightarrow}\; \underbrace{Y[k]}_{2N\text{-pt}} \;\leftrightarrow\; \underbrace{C_x[k]}_{N\text{-pt}}$$

• to understand the energy compaction property
  – we start by considering the sequence y[n] = x[n] + x[2N-1-n]
  – this just consists of adding a mirrored version of x[n] to itself (figure: x[n] and y[n])
  – next we remember that the DFT is identical to the DFS of the periodic extension of the sequence
  – let's look at the periodic extensions for the two cases
    • when the transform is the DFT: we work with the extension of x[n]
    • when the transform is the DCT: we work with the extension of y[n]

Page 27: Energy compaction

$$\underbrace{x[n]}_{N\text{-pt}} \;\leftrightarrow\; \underbrace{y[n]}_{2N\text{-pt}} \;\overset{\text{DFT}}{\leftrightarrow}\; \underbrace{Y[k]}_{2N\text{-pt}} \;\leftrightarrow\; \underbrace{C_x[k]}_{N\text{-pt}}$$

• the two extensions are (figure: periodic extension under the DFT vs. under the DCT)
  – note that in the DFT case the extension introduces discontinuities
  – this does not happen for the DCT, due to the symmetry of y[n]
  – the elimination of this artificial discontinuity, which contains a lot of high frequencies, is the reason why the DCT is much more efficient
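To see this numerically (my own sketch, not from the slides): for a ramp, whose periodic extension has a large jump, a handful of DCT coefficients capture nearly all of the energy, while the DFT needs many more.

```python
import numpy as np
from scipy.fft import dct, fft

N = 64
x = np.arange(N, dtype=float)           # ramp: smooth, but discontinuous when made periodic

def energy_fraction(coeffs, m):
    """Fraction of total energy captured by the m largest-magnitude coefficients."""
    e = np.abs(coeffs) ** 2
    return np.sort(e)[::-1][:m].sum() / e.sum()

C = dct(x, type=2, norm='ortho')        # orthonormal DCT: energy is preserved
X = fft(x) / np.sqrt(N)                 # unitary scaling of the DFT, so energies are comparable

for m in (4, 8):
    print(m, "coeffs:  DCT", round(energy_fraction(C, m), 4),
          "  DFT", round(energy_fraction(X, m), 4))
```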

Page 28: Fast algorithms
• the interpretation of the DCT as

$$\underbrace{x[n]}_{N\text{-pt}} \;\leftrightarrow\; \underbrace{y[n]}_{2N\text{-pt}} \;\overset{\text{DFT}}{\leftrightarrow}\; \underbrace{Y[k]}_{2N\text{-pt}} \;\leftrightarrow\; \underbrace{C_x[k]}_{N\text{-pt}}$$

  – also gives us a fast algorithm for its computation
  – it consists exactly of the three steps
  – 1) y[n] = x[n] + x[2N-1-n]
  – 2) Y[k] = DFT{y[n]}, which can be computed with a 2N-pt FFT
  – 3)

$$C_x[k] = \begin{cases} e^{-j\frac{\pi}{2N}k}\, Y[k], & 0 \le k < N \\ 0, & \text{otherwise} \end{cases}$$

  – the complexity of the N-pt DCT is that of the 2N-pt DFT
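These three steps translate directly into code; a minimal sketch (mine, with scipy's dct used only to check the result):

```python
import numpy as np
from scipy.fft import fft, dct

def dct_via_fft(x):
    """N-pt DCT-II computed with a 2N-pt FFT, following steps 1)-3) above."""
    N = len(x)
    y = np.concatenate([x, x[::-1]])     # step 1: y[n] = x[n] + x[2N-1-n]
    Y = fft(y)                           # step 2: 2N-pt DFT, computed with the FFT
    k = np.arange(N)
    return np.real(np.exp(-1j * np.pi * k / (2 * N)) * Y[:N])   # step 3

x = np.random.default_rng(3).normal(size=8)
print(np.allclose(dct_via_fft(x), dct(x, type=2)))   # matches the unnormalized DCT-II
```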

Page 29: 2D DCT
• the extension to 2D is trivial
• the procedure is the same

$$\underbrace{x[n_1,n_2]}_{N_1\times N_2\text{-pt}} \;\leftrightarrow\; \underbrace{y[n_1,n_2]}_{2N_1\times 2N_2\text{-pt}} \;\overset{\text{2D DFT}}{\leftrightarrow}\; \underbrace{Y[k_1,k_2]}_{2N_1\times 2N_2\text{-pt}} \;\leftrightarrow\; \underbrace{C_x[k_1,k_2]}_{N_1\times N_2\text{-pt}}$$

• with

$$y[n_1,n_2] = x[n_1,n_2] + x[2N_1-1-n_1,\, n_2] + x[n_1,\, 2N_2-1-n_2] + x[2N_1-1-n_1,\, 2N_2-1-n_2]$$

• and

$$C_x[k_1,k_2] = \begin{cases} e^{-j\frac{\pi}{2N_1}k_1}\, e^{-j\frac{\pi}{2N_2}k_2}\, Y[k_1,k_2], & 0 \le k_1 < N_1,\; 0 \le k_2 < N_2 \\ 0, & \text{otherwise} \end{cases}$$
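A quick numerical check of this 2D relationship (my own sketch; the 4x6 test array is arbitrary):

```python
import numpy as np
from scipy.fft import fft2, dctn

rng = np.random.default_rng(4)
x = rng.normal(size=(4, 6))
N1, N2 = x.shape

# mirrored extension y[n1,n2] over 2N1 x 2N2 points
y = np.block([[x,          x[:, ::-1]],
              [x[::-1, :], x[::-1, ::-1]]])
Y = fft2(y)                                           # 2N1 x 2N2 point 2D DFT

k1 = np.arange(N1)[:, None]
k2 = np.arange(N2)[None, :]
C = np.real(np.exp(-1j * np.pi * k1 / (2 * N1)) *
            np.exp(-1j * np.pi * k2 / (2 * N2)) * Y[:N1, :N2])

print(np.allclose(C, dctn(x, type=2)))                # matches the unnormalized 2D DCT-II
```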

Page 30: 2D DCT
• the end result is the 2D DCT pair

$$C_x[k_1,k_2] = \begin{cases} 4\displaystyle\sum_{n_1=0}^{N_1-1}\sum_{n_2=0}^{N_2-1} x[n_1,n_2]\cos\left(\frac{\pi k_1(2n_1+1)}{2N_1}\right)\cos\left(\frac{\pi k_2(2n_2+1)}{2N_2}\right), & 0 \le k_1 < N_1,\; 0 \le k_2 < N_2 \\ 0, & \text{otherwise} \end{cases}$$

$$x[n_1,n_2] = \begin{cases} \dfrac{1}{N_1 N_2}\displaystyle\sum_{k_1=0}^{N_1-1}\sum_{k_2=0}^{N_2-1} w_1[k_1]\, w_2[k_2]\, C_x[k_1,k_2]\cos\left(\frac{\pi k_1(2n_1+1)}{2N_1}\right)\cos\left(\frac{\pi k_2(2n_2+1)}{2N_2}\right), & 0 \le n_1 < N_1,\; 0 \le n_2 < N_2 \\ 0, & \text{otherwise} \end{cases}$$

with

$$w_1[k_1] = \begin{cases} \tfrac{1}{2}, & k_1 = 0 \\ 1, & 1 \le k_1 < N_1 \end{cases} \qquad w_2[k_2] = \begin{cases} \tfrac{1}{2}, & k_2 = 0 \\ 1, & 1 \le k_2 < N_2 \end{cases}$$

• it is possible to show that the 2D DCT can be computed with the row-column decomposition (homework)

Page 31: 2D-DCT
• 1) create an intermediate sequence by computing the 1D-DCT of the rows:

  $x[n_1,n_2] \;\longrightarrow\; f[k_1,n_2]$

• 2) compute the 1D-DCT of the columns:

  $f[k_1,n_2] \;\longrightarrow\; C_x[k_1,k_2]$
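A sketch of the row-column decomposition (my own check, applying scipy's 1D DCT along one axis and then the other, following the f[k1,n2] convention above):

```python
import numpy as np
from scipy.fft import dct, dctn

def dct2_row_column(x):
    """2D DCT-II via the row-column decomposition."""
    f = dct(x, type=2, axis=0)        # step 1: 1D-DCT along n1 -> f[k1, n2]
    return dct(f, type=2, axis=1)     # step 2: 1D-DCT along n2 -> C_x[k1, k2]

x = np.random.default_rng(5).normal(size=(8, 8))
print(np.allclose(dct2_row_column(x), dctn(x, type=2)))   # same as the direct 2D DCT
```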
