8/13/2019 Chapter 8-b Lossy Compression Algorithms
Chapter 5: Lossy Compression
Algorithms
Multimedia Systems (ITEC 305)
Dr. Abir Akram EL ABED
Introduction
Lossless compression algorithms do not deliver compression ratios that are high enough, so most multimedia compression algorithms are lossy.
What is lossy compression?
The compressed data is not the same as the original data, but a close approximation of it.
Lossy compression yields a much higher compression ratio than lossless compression.
A general compression system model
Techniques
Source coding is based on changing the content of the original signal.
Compression rates may be high, but at the price of a loss of information.
Good compression rates may be achieved with source encoding with (occasionally) lossless or (mostly) little perceivable loss of information.
Three broad methods exist:
1) Transform Coding: a transformation from the time domain (e.g. 1D audio; video: 2D imagery over time) or the spatial domain (e.g. 2D imagery) to the frequency domain, via the Discrete Cosine Transform (DCT), the heart of JPEG, MPEG Video, and MPEG Audio, or via the Fourier Transform (FT), used in MPEG Audio.
2) Differential Encoding
3) Vector Quantisation
Distortion Measures
The three most commonly used distortion measures in image compression are:
1) Mean square error (MSE):
$$\sigma^2 = \frac{1}{N}\sum_{n=1}^{N}(x_n - y_n)^2$$
where $x_n$, $y_n$, and $N$ are the input data sequence, the reconstructed data sequence, and the length of the data sequence, respectively.
2) Signal-to-noise ratio (SNR), in decibel units (dB):
$$\mathrm{SNR} = 10 \log_{10} \frac{\sigma_x^2}{\sigma_d^2}$$
where $\sigma_x^2$ is the average square value of the original data sequence and $\sigma_d^2$ is the MSE.
3) Peak signal-to-noise ratio (PSNR):
$$\mathrm{PSNR} = 10 \log_{10} \frac{x_{\mathrm{peak}}^2}{\sigma_d^2}$$
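As a sketch of how the three measures relate, the following Python computes each one for a toy sequence. The function names and the 8-bit peak value of 255 are my own choices, not from the slides:

```python
import numpy as np

def mse(x, y):
    """Mean square error between original x and reconstruction y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean((x - y) ** 2)

def snr_db(x, y):
    """SNR in dB: 10*log10(average square value of x / MSE)."""
    return 10 * np.log10(np.mean(np.asarray(x, float) ** 2) / mse(x, y))

def psnr_db(x, y, peak=255.0):
    """PSNR in dB; peak=255 assumes 8-bit samples."""
    return 10 * np.log10(peak ** 2 / mse(x, y))

x = [100, 120, 140, 160]        # original
y = [101, 119, 141, 159]        # reconstruction, off by 1 everywhere
print(mse(x, y))                # 1.0
print(round(psnr_db(x, y), 2))  # 48.13
```

Note that PSNR depends only on the error and the peak value, so it is the same for dark and bright images with the same reconstruction error; SNR additionally depends on the signal's own energy.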
Quantization
Reduce the number of distinct output values to a much
smaller set
Main source of the loss in lossy compression
Three different forms of quantization:
1) Uniform: midrise and midtread quantizers
2) Nonuniform: companded quantizer
3) Vector Quantization
1) Uniform Scalar Quantization
A uniform scalar quantizer partitions the domain of input values into equally spaced intervals, except possibly at the two outer intervals.
The output (reconstruction) value for each interval is the midpoint of the interval.
The length of each interval is the step size Δ.
Two types of uniform scalar quantizers:
1) Midrise quantizers have an even number of output levels; with Δ = 1, Q(x) = ⌈x⌉ − 0.5.
2) Midtread quantizers have an odd number of output levels, including zero as one of them; with Δ = 1, Q(x) = ⌊x + 0.5⌋.
Midrise and midtread quantizers
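A minimal sketch of the two quantizers. The `delta` parameter is my generalization to an arbitrary step size; with the slides' Δ = 1 these reduce to Q(x) = ⌈x⌉ − 0.5 and Q(x) = ⌊x + 0.5⌋:

```python
import math

def midrise(x, delta=1.0):
    """Midrise: Q(x) = (ceil(x/delta) - 0.5) * delta.
    Even number of levels; zero is never an output value."""
    return (math.ceil(x / delta) - 0.5) * delta

def midtread(x, delta=1.0):
    """Midtread: Q(x) = floor(x/delta + 0.5) * delta.
    Odd number of levels; zero is one of the outputs."""
    return math.floor(x / delta + 0.5) * delta

print(midrise(0.2))   # 0.5  (small input, but never 0)
print(midtread(0.2))  # 0.0  (small inputs reconstruct to zero)
print(midtread(0.6))  # 1.0
```

The midtread quantizer's zero level is why it is preferred for signals that are often exactly silent or flat: small noise around zero reconstructs to zero instead of oscillating between ±Δ/2.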
2) Companded Quantizer
Companded quantization is nonlinear.
A compander consists of a compressor function G, a uniform quantizer, and an expander function G⁻¹.
The two commonly used companders are the μ-law and A-law companders.
Companded quantization
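A sketch of a μ-law compander following the G → uniform quantizer → G⁻¹ structure above. The value μ = 255 (common in North American telephony) and the function names are assumptions for illustration:

```python
import numpy as np

MU = 255.0  # mu-law parameter (assumed value; 255 is common in telephony)

def compress(x):
    """Compressor G: maps [-1, 1] -> [-1, 1], stretching small amplitudes."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    """Expander G^-1: exact inverse of the compressor."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def companded_quantize(x, levels=256):
    """Compress, apply a uniform midtread quantizer, then expand."""
    step = 2.0 / levels
    yq = np.floor(compress(x) / step + 0.5) * step
    return expand(yq)

x = np.array([-0.8, -0.05, 0.01, 0.5])
assert np.allclose(expand(compress(x)), x)  # G^-1(G(x)) == x
print(companded_quantize(x))
```

Because G stretches small amplitudes before the uniform quantizer, quiet samples get effectively finer quantization steps than loud ones, which matches how human hearing perceives loudness.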
3) Vector Quantization (VQ)
According to Shannon's original work on information theory, any compression system performs better if it operates on vectors or groups of samples rather than on individual symbols or samples.
Form vectors of input samples by simply concatenating a number of consecutive samples into a single vector.
Instead of the single reconstruction values used in scalar quantization, VQ uses code vectors with n components.
A collection of these code vectors forms the codebook.
Basic vector quantization procedure
Vector Quantisation Encoding/Decoding
Search Engine:
Group (cluster) data into vectors
Find closest code vectors
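The encode/decode steps above can be sketched as follows; the codebook of four 2-component code vectors is made up for illustration:

```python
import numpy as np

# Hypothetical codebook: four 2-component code vectors (values are illustrative).
codebook = np.array([[0.0, 0.0],
                     [1.0, 1.0],
                     [4.0, 4.0],
                     [9.0, 9.0]])

def vq_encode(samples):
    """Group consecutive samples into 2-vectors and return, for each,
    the index of the closest code vector (Euclidean distance)."""
    vectors = np.asarray(samples, float).reshape(-1, 2)
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def vq_decode(indices):
    """Decoding is just a table lookup into the shared codebook."""
    return codebook[indices].ravel()

idx = vq_encode([0.9, 1.2, 8.5, 9.1])
print(idx)             # [1 3]
print(vq_decode(idx))  # [1. 1. 9. 9.]
```

Only the indices are transmitted, so the rate is log2(codebook size) bits per vector; all the search cost sits on the encoder side, while the decoder is a trivial lookup.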
Example
Consider vectors of 2×2 blocks, and allow only 8 codes in the table.
9 vector blocks are present in the image above.
With 9 vector blocks but only 8 codes, one block has to be approximated by its closest code vector here.
Resulting code book for the above image
A small block of images and intensity values
Transform Coding
The rationale behind transform coding:
If Y is the result of a linear transform T of the input vector X such that the components of Y are much less correlated, then Y can be coded more efficiently than X.
If most information is accurately described by the first few components of a transformed vector, then the remaining components can be coarsely quantized, or even set to zero, with little signal distortion.
The Discrete Cosine Transform (DCT) will be studied first.
Spatial Frequency and DCT
Spatial frequency indicates how many times pixel values change across an image block.
The DCT formalizes this notion with a measure of how much the image contents change in correspondence to the number of cycles of a cosine wave per block.
The role of the DCT is to decompose the original signal into its DC and AC components; the role of the IDCT is to reconstruct (re-compose) the signal.
Definition of the DCT:
Given an input function f(i, j) over two integer variables i and j (a piece of an image), the 2D DCT transforms it into a new function F(u, v), with integer u and v running over the same range as i and j. The general definition of the transform is:
$$F(u, v) = \frac{2\,C(u)\,C(v)}{\sqrt{MN}} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \cos\frac{(2i+1)\,u\pi}{2M} \cos\frac{(2j+1)\,v\pi}{2N}\, f(i, j)$$
where i, u = 0, 1, . . . , M−1; j, v = 0, 1, . . . , N−1; and the constants C(u) and C(v) are determined by
$$C(\xi) = \begin{cases} \dfrac{\sqrt{2}}{2} & \text{if } \xi = 0, \\ 1 & \text{otherwise.} \end{cases}$$
2D Discrete Cosine Transform (2D DCT):
$$F(u, v) = \frac{C(u)\,C(v)}{4} \sum_{i=0}^{7} \sum_{j=0}^{7} \cos\frac{(2i+1)\,u\pi}{16} \cos\frac{(2j+1)\,v\pi}{16}\, f(i, j)$$
where i, j, u, v = 0, 1, . . . , 7.
2D Inverse Discrete Cosine Transform (2D IDCT):
The inverse function is almost the same, with the roles of f(i, j) and F(u, v) reversed, except that now C(u)C(v) must stand inside the sums:
$$\tilde{f}(i, j) = \sum_{u=0}^{7} \sum_{v=0}^{7} \frac{C(u)\,C(v)}{4} \cos\frac{(2i+1)\,u\pi}{16} \cos\frac{(2j+1)\,v\pi}{16}\, F(u, v)$$
where i, j, u, v = 0, 1, . . . , 7.
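A direct (unoptimized) implementation of the 8×8 definition; feeding it a constant block shows all energy landing in the DC coefficient F(0, 0), which is the energy-compaction property transform coding exploits:

```python
import numpy as np

def C(u):
    return np.sqrt(2) / 2 if u == 0 else 1.0

def dct2d(f):
    """8x8 2D DCT, computed directly from the definition."""
    F = np.zeros((8, 8))
    i = np.arange(8)
    for u in range(8):
        for v in range(8):
            cos_u = np.cos((2 * i + 1) * u * np.pi / 16)  # row cosines
            cos_v = np.cos((2 * i + 1) * v * np.pi / 16)  # column cosines
            F[u, v] = C(u) * C(v) / 4 * np.sum(np.outer(cos_u, cos_v) * f)
    return F

flat = np.full((8, 8), 100.0)  # constant block: pure DC content
F = dct2d(flat)
print(round(F[0, 0]))          # 800 -- every AC coefficient is ~0
```

This O(N^4) double loop is only for checking the formula; practical codecs compute the 2D DCT as two passes of a fast 1D DCT over rows and columns.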
1D Discrete Cosine Transform (1D DCT):
$$F(u) = \frac{C(u)}{2} \sum_{i=0}^{7} \cos\frac{(2i+1)\,u\pi}{16}\, f(i)$$
where i = 0, 1, . . . , 7, u = 0, 1, . . . , 7.
1D Inverse Discrete Cosine Transform (1D IDCT):
$$\tilde{f}(i) = \sum_{u=0}^{7} \frac{C(u)}{2} \cos\frac{(2i+1)\,u\pi}{16}\, F(u)$$
where i = 0, 1, . . . , 7, u = 0, 1, . . . , 7.
1D DCT basis functions
Graphical illustration of the 8 × 8 2D DCT basis
Examples of 1D Discrete Cosine Transform: (a) a DC signal f1(i), (b) an AC signal f2(i), (c) f3(i) = f1(i) + f2(i), and (d) an arbitrary signal f(i).