
A HYBRID APPROACH FOR EFFICIENT

COMPRESSION OF MULTIMODAL MEDICAL

IMAGES

THESIS

Submitted by

B. PERUMAL (Reg. No. 201111204)

In partial fulfillment for the award of the degree

of

DOCTOR OF PHILOSOPHY

IN ELECTRONICS & INSTRUMENTATION ENGINEERING

DEPARTMENT OF ELECTRONICS & INSTRUMENTATION

ENGINEERING

KALASALINGAM UNIVERSITY (KALASALINGAM ACADEMY OF RESEARCH AND EDUCATION)

ANAND NAGAR, KRISHNANKOIL – 626 126 JULY 2016


CERTIFICATE

This is to certify that all corrections and suggestions pointed out by the Indian/Foreign

Examiner(s) are incorporated in the thesis titled "A Hybrid Approach for Efficient

Compression of Multimodal Medical Images" submitted by Mr. B. Perumal (Reg. No. 201111204).

SUPERVISOR

Place: Krishnankoil

Date: 17.09.2016


KALASALINGAM UNIVERSITY

(Kalasalingam Academy of Research and Education)

ANAND NAGAR, KRISHNANKOIL-626 126

BONAFIDE CERTIFICATE

Certified that this thesis titled "A HYBRID APPROACH FOR

EFFICIENT COMPRESSION OF MULTIMODAL MEDICAL

IMAGES" is the bonafide work of Mr. B. PERUMAL, who carried out the

research under my supervision. Certified further that, to the best of my

knowledge, the work reported herein does not form part of any other thesis or

dissertation on the basis of which a degree or award was conferred on an earlier

occasion on this or any other scholar.

Dr. M. PALLIKONDA RAJASEKARAN SUPERVISOR, Professor, Department of Electronics and Communication Engineering, Kalasalingam University, Anand Nagar, Krishnankoil – 626 126


ABSTRACT

Medical imaging plays a vital role in medical assistance, providing

diagnostic information for the clinical management of patients and supporting

suitable treatment. Every year, terabytes of medical image data are generated

through progressive imaging modalities such as Positron Emission Tomography

(PET), Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and

many newer imaging methods. Improvements in technology have given

radiology systems the opportunity to use intricate compression algorithms to

reduce the file size of every image, in an effort to manage the data volume

produced by new or more intricate modalities. In general, various compression

strategies such as the Discrete Cosine Transform (DCT), Discrete Wavelet

Transform (DWT), Fractal compression, Set Partitioning In Hierarchical Trees

(SPIHT), Neural Network Back Propagation (NNBP) and the Radial Basis

Function Neural Network (RBFNN) are applied to medical images. Currently,

evolving hybrid schemes for effective image compression has gained immense

popularity among researchers. A hybrid technique affords well-organized and

precise coding of medical images. Efficient compression techniques such as

hybrid DWT with BPNN and hybrid Fractal with NNRBF can resolve the

complications of storing and transmitting medical data. The latest compression

schemes achieve better compression rates when the loss of quality is affordable.

Medicine, however, cannot afford insufficiency in the diagnostically significant

Region of Interest (ROI). An approach that achieves a high compression rate

with good quality in the ROI is therefore required, and a hybrid coding scheme

seems to be the only solution to this twofold problem. The objective of this

thesis is to compare a few basic compression techniques with hybrid DWT with

BPNN and hybrid Fractal with NNRBF compression. Image compression

methods are analyzed with several parameters, including Compression Ratio

(CR), Peak Signal to Noise Ratio (PSNR), Bits per pixel (Bpp) and Mean Square

Error (MSE). The quality of any compressed image can be assessed using this

set of parameters. The results clearly show that hybrid image compression using

Fractal with NNRBF provides better CR and PSNR. The mentioned analyses are

carried out in MATLAB simulations.


ACKNOWLEDGEMENT

Though only my name appears on the cover of this thesis, a great

many people have contributed to its production. I owe my gratitude to all those

people who have made this thesis possible and because of whom my research

experience has been one that I will cherish forever.

My deep felt gratitude goes to our respected and honorable Chairman

(Late) “Kalvivallal” Thiru. T. Kalasalingam, B.Com., for providing the

technical environment to complete my project successfully.

I am highly indebted and express my token of thanks to our beloved

Chancellor "Ilayavallal" Dr. K. Sridharan, M.Com., M.B.A., M.Phil., Ph.D.,

for allowing me to do the project work.

I thank our Director Dr. S. Shasi Anand, Ph.D., for being the beacon

light, guiding and infusing the strength and enthusiasm to work

successfully.

I express my sincere thanks to our Vice-Chancellor Dr. S. Saravana

Sankar, Ph.D., for his valuable suggestions and continuous encouragement in

the completion of the project work.

I am extremely grateful to my Supervisor Dr. M. Pallikonda

Rajasekaran, Ph.D., Professor, Department of Electronics and Communication

Engineering, for his many thoughtful comments and valuable suggestions.

Last but not least, I thank my parents, my wife, my son, the

teaching staff of our University, the non-teaching staff, the R&D Department

and my friends for their moral support.

PERUMAL B


TABLE OF CONTENTS

CHAPTER NO.  TITLE  PAGE NO.

ABSTRACT  III
LIST OF TABLES  XI
LIST OF FIGURES  XIII
LIST OF SYMBOLS AND ABBREVIATIONS  XVI

1. INTRODUCTION
1.1 INTRODUCTION  1
1.2 DATA COMPRESSION  1
1.2.1 Image Compression  2
1.2.2 Compression Techniques  2
1.2.2.1 Lossless Compression  2
1.2.2.2 Lossy Compression  5
1.3 IMAGING TECHNIQUES  7
1.3.1 Computer Tomography (CT)  7
1.3.2 Magnetic Resonance Imaging (MRI)  7
1.3.3 Positron Emission Tomography (PET)  9
1.4 IMAGE COMPRESSION PERFORMANCE METRICS  10
1.4.1 Image quality  11
1.4.1.1 Distortion  11
1.4.1.2 Fidelity or Quality  12
1.4.1.3 Compression Ratio (CR)  12
1.4.1.4 Bits per pixel (Bpp)  12
1.4.1.5 Speed of Compression  13
1.5 THE COMPRESSION SYSTEM  13
1.6 OVERVIEW OF METHODOLOGY  16
1.7 RESEARCH MOTIVATION  16
1.8 PROBLEM STATEMENT  18
1.9 OBJECTIVES OF THE RESEARCH  18
1.10 CONTRIBUTIONS OF THE THESIS  19
1.11 ORGANIZATION OF THE THESIS  19

2. LITERATURE SURVEY
2.1 INTRODUCTION  22
2.1.1 Description of image compression block diagram  23
2.1.2 Image format  24
2.2 LITERATURE REVIEW  24
2.3 CONCLUSION  40

3. METHODOLOGIES
3.1 DISCRETE COSINE TRANSFORM (DCT)  41
3.1.1 Image Compression in DCT  42
3.1.2 DCT Encoding  43
3.1.3 Compression Steps in DCT  44
3.1.4 Quantization Steps  44
3.1.5 Entropy Encoding  45
3.2 DISCRETE WAVELET TRANSFORM (DWT)  46
3.2.1 Advantages of DWT  48
3.2.2 Wavelets used in Image Compression  48
3.2.3 Aspects of Wavelets  49
3.3 FRACTAL ALGORITHM  50
3.3.1 Presentation about Fractal Algorithm  51
3.3.2 Features of Fractal Algorithm  52
3.3.3 Fractal Image Compression  52
3.4 SET PARTITIONING IN HIERARCHICAL TREES (SPIHT)  53
3.4.1 Haar Wavelet  53
3.4.2 Formation of Cells  54
3.4.3 Zero Tree Encoding  55
3.4.4 SPIHT Algorithm  55
3.5 INTRODUCTION TO NEURAL NETWORKS  57
3.5.1 Back Propagation Neural Networks (BPNN)  58
3.5.2 Image Compression using Back Propagation  60
3.5.3 Use of Image Compression in Back Propagation  61
3.5.4 Neural Network Radial Basis Function (NNRBF)  63
3.5.4.1 Radial basis function operation  63
3.5.4.2 Output nodes  66
3.5.4.3 Training of RBF neural networks  66

4. COMPRESSION TECHNIQUES FOR MEDICAL IMAGES USING FRACTAL, SPIHT AND DCT ALGORITHMS
4.1 INTRODUCTION  67
4.2 FRACTAL, SPIHT AND DCT METHODS  68
4.2.1 Fractal  68
4.2.2 Set Partitioning in Hierarchical Trees  69
4.2.3 Discrete Cosine Transform  70
4.3 IMAGE QUALITY PARAMETER EVALUATION  71
4.3.1 Performance Parameter  72
4.4 RESULTS AND COMPARISON  73
4.5 CONCLUSION  83

5. EFFICIENT IMAGE COMPRESSION TECHNIQUES FOR COMPRESSING MULTIMODAL MEDICAL IMAGES USING NEURAL NETWORK RADIAL BASIS FUNCTION APPROACH
5.1 INTRODUCTION  84
5.2 ALGORITHMS FOR IMAGE COMPRESSION  85
5.2.1 Fractal Algorithm  85
5.2.2 Neural Network Back Propagation  86
5.2.3 Neural Network Radial Basis Function for Image Compression  87
5.3 PERFORMANCE PARAMETERS  88
5.4 RESULTS AND COMPARISON  88
5.5 CONCLUSION  98

6. A HYBRID DISCRETE WAVELET TRANSFORM WITH NEURAL NETWORK BACK PROPAGATION APPROACH FOR EFFICIENT MEDICAL IMAGE COMPRESSION
6.1 INTRODUCTION  99
6.2 ALGORITHMS USED  99
6.2.1 Back Propagation Neural Networks Algorithm  99
6.2.2 Discrete Wavelet Transform  101
6.3 PERFORMANCE PARAMETERS  102
6.4 RESULTS AND DISCUSSION  102
6.5 CONCLUSION  113

7. A HYBRID APPROACH USING FRACTAL AND NEURAL NETWORK RADIAL BASIS FUNCTION FOR EFFICIENT COMPRESSION OF MULTI MODAL MEDICAL IMAGES
7.1 INTRODUCTION  114
7.2 METHODOLOGIES  115
7.2.1 Fractal Algorithm  115
7.2.2 Neural Network Radial Basis for Image Compression  115
7.3 IMPLEMENTATION OF HYBRID TECHNIQUES  117
7.3.1 Hybrid image compression  117
7.4 IMAGE QUALITY PARAMETER EVALUATION  117
7.5 SIMULATION RESULTS AND ANALYSIS  118
7.6 CONCLUSION  128

8. CONCLUSION AND FUTURE WORK
8.1 CONCLUSION  130
8.2 FUTURE WORK  135

REFERENCES  136
LIST OF PUBLICATIONS  146
CURRICULUM VITAE  149


LIST OF TABLES

TABLE NO.  TITLE  PAGE NO.

1.1 Run Length Encoding  3
4.1 Performance comparison of 24 medical images obtained by using the DCT, SPIHT and Fractal algorithms  74
5.1 Performance comparison of 24 medical images obtained by using the Fractal, NNRBF and NNBP algorithms  89
6.1 Performance comparison of 24 medical images obtained by using the DWT, BPNN and hybrid DWT-BP algorithms  104
7.1 Performance comparison of 24 medical images obtained by using the NNRBF, Fractal and Hybrid FNNRBF algorithms  119
8.1 Compression ratio of 24 medical images obtained by using the DCT, DWT, Fractal, NNBP, NNRBF, Hybrid Fractal-NNRBF and Hybrid DWT-NNBP algorithms  131
8.2 Peak Signal to Noise Ratio of 24 medical images obtained by using the DCT, DWT, Fractal, NNBP, NNRBF, Hybrid Fractal-NNRBF and Hybrid DWT-NNBP algorithms  132
8.3 Memory of 24 medical images obtained by using the DCT, DWT, Fractal, NNBP, NNRBF, Hybrid Fractal-NNRBF and Hybrid DWT-NNBP algorithms  133
8.4 Execution time of 24 medical images obtained by using the DCT, DWT, Fractal, NNBP, NNRBF, Hybrid Fractal-NNRBF and Hybrid DWT-NNBP algorithms  134


LIST OF FIGURES

FIGURE NO.  TITLE  PAGE NO.

1.1 Schematic view of an MRI scanner  8
1.2 General block diagram of compression technique  13
1.3 General block diagram of de-compression technique  13
1.4 The compression process on forward transform  16
2.1 Block diagram of Image Compression  23
3.1 Block diagram of DCT  42
3.2 Conversion of spatial domain to frequency domain  43
3.3 Block diagram of DWT  47
3.4 Three Level Decomposition Wavelet Filter  49
3.5 2-D Discrete Wavelet Transform in image compression  50
3.6 A photocopy machine that makes three reduced copies of the input image  53
3.7 Formation of cells of parent-offspring conditions  55
3.8 Spatial orientation tree in SPIHT  56
3.9 Block diagram of Neural Network  58
3.10 General Structure of BPNN  59
3.11 General structure of NNRBF Algorithm  65
4.1 Flow diagram of Fractal coding  68
4.2 Basic block diagram of SPIHT method  69
4.3 Formation of cells of SPIHT  69
4.4 Two-dimensional DCT of 8-by-8 blocks in the image  71
4.5 Compression Ratio expressed in percentage  75
4.6 PSNR for three different algorithms: DCT, SPIHT and Fractal  75
4.7 Memory expressed in kilobytes  76
4.8 Execution Time expressed in seconds  76
4.9 Results obtained for various medical images: (a) Input Image, (b) DCT, (c) SPIHT and (d) Fractal algorithms  77
5.1 General structure of Neural Network Back Propagation Algorithm  87
5.2 General structure of Radial Basis Function Neural Network  88
5.3 Compression Ratio expressed in percentage  90
5.4 Peak Signal to Noise Ratio expressed in decibels  90
5.5 Memory expressed in kilobytes  91
5.6 Execution Time expressed in seconds  91
5.7 Results obtained for various medical images: (a) Input Images, (b) Fractal, (c) Neural Network Back Propagation (NNBP) and (d) Radial Basis Function Neural Network algorithms  92
6.1 General Structure of BPNN  100
6.2 Block diagram of Hybrid DWT-BP algorithm  101
6.3 Comparison chart of proposed work and existing method  102
6.4 Comparison of Compression Ratio for different input images  105
6.5 Comparison of PSNR values for different input images  105
6.6 Memory expressed in kilobytes  106
6.7 Execution Time expressed in seconds  106
6.8 Results obtained for various medical images: (a) Input Images, (b) DCT, (c) SPIHT and (d) Fractal algorithms  107
7.1 General structure of NNRBF  116
7.2 Hybrid image compression using FNNRBF method  117
7.3 Compression Ratio expressed in percentage  120
7.4 PSNR expressed in decibels  120
7.5 Memory expressed in kilobytes  121
7.6 Execution Time expressed in seconds  121
7.7 Results obtained for various medical images: (a) Input Images, (b) Fractal, (c) NNRBF and (d) Hybrid Fractal & NNRBF algorithms  122


LIST OF SYMBOLS AND ABBREVIATIONS

Symbols

H_RL - Run-length entropy

H0, H1 - Entropies of the two contrast levels

L0, L1 - Mean values of the high-contrast run lengths

N x N - Dimensions of the images

I(i,j) - Original image

K(i,j) - Approximated version

MAX_I - Maximum possible pixel value of the image

F(u,v) - DCT coefficient in row u and column v of the DCT matrix

f(x,y) - Intensity of the pixel in row x and column y

3D - Three Dimensional

W - Weights between the hidden layer and the output layer

N - Columns of input image

M - Rows of input image

A - Matrix representing the 2D image pixels

T - Result of the complete transformation

f(x,y) - Original image

g(x,y) - Reconstructed image

M, N - Rows and columns of input image

N - Number of neurons in the hidden layer

C_i - Center vector of neuron i

a_i - Weight of neuron i


Abbreviations

RLE - Run-length Encoding

LZW - Lempel-Ziv–Welch

DFT - Discrete Fourier Transform

DCT - Discrete Cosine Transform

IFS - Iterated Function System

CT - Computed Tomography

MRI - Magnetic Resonance Imaging

PET - Positron Emission Tomography

RF - Radio Frequency

MAP - Maximum a Posteriori

FBP - Filtered Back Projection

ANN-BPN - Artificial Neural Network with Back Propagation Network

SVM - Support Vector Machine

ANN-RBF - Artificial Neural Network with Radial Basis Function

MSE - Mean Square Error

PSNR - Peak Signal to Noise Ratio

CR - Compression Ratio

Bpp - Bits per pixel

SPIHT - Set Partitioning in Hierarchical Trees

NNBP - Neural Network Back Propagation

NNRBF - Neural Network Radial Basis Function

RBFNN - Radial Basis Function Neural Network

DWT-BP - Discrete Wavelet Transform-Back Propagation

NN - Neural Network

FNNRBF - Fractal based Neural Network Radial Basis Function

ISO - International Organization for Standardization

EZW - Embedded Zero Tree Wavelet


BPNN - Back Propagation Neural Networks

RBF - Radial Basis Function

MLP - Multi-Layer Perceptron

PIFS - Partitioned Iterated Function System

NNTOOL - Neural Network Tool

ANN - Artificial Neural Network

FIF - Fractal Image Format


CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION

Image processing is a type of signal processing in which the input and

output signals are images. An image can be treated as a 2-D signal via a matrix

representation. Earlier, image processing was mostly carried out using analog

devices; nowadays, images are processed in the digital domain.

Digital image processing overcomes issues such as the

inflexibility of analog systems, noise and distortion during processing, and

difficulty of implementation. Image processing is a method that enhances

original images obtained from cameras and sensors in day-to-day life.

Different methods have been implemented for image processing

during the past few decades. Image processing systems are becoming popular

because of the availability of powerful personal computers, devices with large

memory, graphics software, etc. The wide range of applications of

image processing includes the following: i) Remote Sensing ii) Medical Imaging

iii) Forensic Studies iv) Textiles v) Material Science vi) Military etc.

1.2 DATA COMPRESSION

Data compression is the process of reducing the number of bits

needed to represent data, so that storing or

transmitting the data can be done in a proficient manner. The data could be an image,

video or audio; in the present context it is an image. Image compression is a form

of data compression that encodes the original image with fewer bits. The goal is to

decrease the storage size. When the original image is retrieved from the

compressed image, the decompressed image should be similar to the original

image.


1.2.1 Image Compression

The image has become the most important information carrier in

people's lives and the biggest medium containing information. As the need for

storing and transmitting images continues to increase, the field of image

compression also continues to develop. An image contains a large amount of

data, mostly with redundant information, which occupies massive storage space and

consumes large transmission bandwidth. An image consists of pixels that are

highly correlated with one another within close proximity, and these correlated

pixels lead to redundant data.

Two types of data redundancy are observed:

• Spatial Redundancy: The intensities of neighboring pixels are

correlated, so the intensity information of an image contains

unnecessarily repeated (i.e. redundant) data within one frame.

• Spectral Redundancy: Different frequency components of an image contain

redundant data because of the relationship between the various color

planes.

1.2.2 Compression Techniques

Compression techniques are classified into two categories, namely

lossless and lossy compression algorithms, which are explained below.

1.2.2.1 Lossless Compression

Lossless compression methods recover the data without

loss: the initial data can be retrieved exactly from the compressed data. Lossless

compression is used in fields that cannot tolerate any variation from the original data.

A losslessly compressed image has a larger size compared with a lossy one. In

power-constrained applications like wireless communication, lossless

compression is not preferred, as it consumes more energy and more time for

image transfer. In the following sections, these lossless compression techniques are

discussed.

a) Run length encoding

b) Huffman encoding

c) LZW coding

d) Area coding

a) Run Length Encoding

Run length encoding is a very basic compression strategy used for

sequential data, and it is useful when the data contain redundant runs. For

gray-scale images, the run length code is the sequence of pairs {Vi, Ri}, where Vi

is the pixel intensity and Ri is the number of successive pixels with the intensity

Vi, as shown in Table 1.1. In this example, 11 pixels, each represented by one

byte, are coded using five bytes, giving a compression ratio of 11:5.

Table 1.1 Run Length Encoding

Input sequence: 86 86 86 86 86 91 91 91 91 75 75
Run length code: {86,5} {91,4} {75,2}

Images that repeat intensities along their rows and columns

can often be compressed by representing the runs of identical

intensities as run-length pairs, where each pair indicates the start of a new

intensity and the number of consecutive pixels having that intensity. This strategy is

used for data compression as part of the bitmap image file format. RLE

(Run-length Encoding) is especially successful when compressing binary

images, since there are only two possible intensities, with high contrast.

Moreover, a variable-length code can be applied to the run lengths

themselves. The approximate run-length entropy is

H_RL = (H0 + H1) / (L0 + L1)    (1.1)

where H0 and H1 are the entropies of the two contrast levels, and L0 and L1 are the mean

values of the high-contrast run lengths.
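The run-length scheme of Table 1.1 can be sketched in a few lines (shown here in Python for illustration; the thesis experiments themselves use MATLAB):

```python
def rle_encode(pixels):
    """Encode a sequence of pixel values as (value, run-length) pairs."""
    runs = []
    for value in pixels:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((value, 1))              # start a new run
    return runs

def rle_decode(runs):
    """Expand (value, run-length) pairs back into the pixel sequence."""
    return [value for value, length in runs for _ in range(length)]

print(rle_encode([86] * 5 + [91] * 4 + [75] * 2))
# -> [(86, 5), (91, 4), (75, 2)], matching Table 1.1
```

Because decoding simply replays the runs, the scheme is exactly lossless: `rle_decode(rle_encode(x))` always returns `x`.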

b) Huffman Encoding

This technique codes symbols based on their statistical

occurrence frequencies (probabilities). In this method, the pixels in the

image are treated as symbols. The symbols which occur more often are

assigned fewer bits, while the symbols that occur less frequently

are allotted relatively more bits. Huffman coding is a prefix code. Nearly every

image coding standard uses lossy practices in the initial stages of compression and

uses Huffman coding as the last step.
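A minimal Huffman code builder, sketched in Python with the standard `heapq` module (an illustration of the idea above, not the thesis's MATLAB implementation):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bit string} from a symbol sequence."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tie-breaker, partial code table)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)     # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(codes)  # the frequent symbol 'a' receives the shortest code
```

The result is a prefix code: no codeword is a prefix of another, so a bit stream can be decoded unambiguously.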

c) LZW Coding

LZW (Lempel-Ziv-Welch) is a dictionary-based coding scheme. It may

be static or dynamic. In static dictionary coding, the dictionary is fixed during

the encoding and decoding processes; in dynamic dictionary coding, the

dictionary is updated on the fly. LZW is extensively used in the computer industry and is

implemented as the compress command on UNIX.
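The dynamic-dictionary variant can be sketched as follows (illustrative Python; the dictionary starts with all single bytes and grows as new phrases are seen):

```python
def lzw_compress(data):
    """LZW encoding with a dynamic dictionary seeded with all single bytes."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    word = b""
    out = []
    for byte in data:
        candidate = word + bytes([byte])
        if candidate in dictionary:
            word = candidate                 # keep extending the phrase
        else:
            out.append(dictionary[word])     # emit code for longest known phrase
            dictionary[candidate] = next_code
            next_code += 1
            word = bytes([byte])
    if word:
        out.append(dictionary[word])
    return out

out = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(out))  # 16 codes for 24 input bytes: repeats are captured as phrases
```

Codes at or above 256 mark phrases the encoder learned on the fly; the decoder can rebuild the same dictionary as it reads them.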

d) Area Coding

Area coding is an enhanced form of run length coding that reflects the

two-dimensional character of images. This is a significant advance over the

other lossless techniques. For coding an image, it does not make much

sense to interpret it as a sequential stream, as it is actually an array of

sequences building up a two-dimensional object. The idea of area coding

is to find rectangular regions with the same characteristics. These regions are coded

in a descriptive form as an element with two points and a certain structure.

This kind of coding can be highly effective, but it bears the problem of being a

nonlinear method, which is hard to implement in hardware. Accordingly, its

performance in terms of compression time is not competitive.


1.2.2.2 Lossy Compression

Lossy compression involves some loss of information: data that

have been compressed using lossy techniques generally cannot be recovered or

reconstructed exactly. It achieves superior compression ratios at the expense of

distortion in the reconstruction. The benefits of lossy over lossless compression are a high

compression ratio, less processing time and low energy requirements in power-constrained

applications. In the following sections the lossy compression

techniques are explained.

a) Transformation coding

b) Vector quantization

c) Fractal coding

d) Block Truncation Coding

e) Sub band coding

a) Transformation Coding

In this coding scheme, the DFT (Discrete Fourier Transform) or DCT

(Discrete Cosine Transform) is used to change the pixels in the original

image into frequency-domain coefficients (called transform coefficients).

This set of coefficients has several desirable properties. One is the energy

compaction property, whereby the greater part of the energy of the original

data is packed into just a few of the significant transform coefficients.

This is the essence of achieving compression: only those few

significant coefficients are chosen and the rest are discarded. The chosen

coefficients are then subjected to quantization and entropy encoding. DCT

coding has been the most popular approach to transform coding.
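Energy compaction can be demonstrated with a hand-rolled orthonormal 2-D DCT-II (a Python sketch on a synthetic smooth block; the thesis's own analyses use MATLAB):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)   # DC row gets the smaller normalization
    return c

n = 8
C = dct_matrix(n)
# A smooth 8x8 "image" block: a gentle horizontal intensity ramp
block = np.tile(np.linspace(0, 255, n), (n, 1))
coeffs = C @ block @ C.T          # forward 2-D DCT
energy = coeffs ** 2
# Fraction of total energy held by the 4 largest of the 64 coefficients
top4 = np.sort(energy.ravel())[-4:].sum() / energy.sum()
print(f"energy in top 4 of 64 coefficients: {top4:.4f}")
```

For smooth blocks almost all of the energy sits in a handful of low-frequency coefficients, which is why discarding the rest costs little visual quality; since the transform is orthonormal, keeping every coefficient reconstructs the block exactly via `C.T @ coeffs @ C`.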

b) Vector Quantization

The fundamental idea in this system is to build a dictionary of

fixed-size vectors, called code vectors. A vector is typically a block of pixel

values. A given image is then partitioned into non-overlapping blocks

(vectors) called image vectors. Each image vector is then matched against the

dictionary, and the index of the closest code vector is used as the encoding of

the original image vector.
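The encode/decode pair can be sketched with a hypothetical two-entry codebook of flattened 2x2 blocks (illustrative Python; a real codebook would be trained, e.g. with k-means):

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each image vector to the index of its nearest code vector."""
    # Squared Euclidean distance from every vector to every code vector
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct by looking each index up in the codebook."""
    return codebook[indices]

# Hypothetical codebook: one "dark" and one "bright" 2x2 block, flattened
codebook = np.array([[0, 0, 0, 0], [250, 250, 250, 250]], dtype=float)
vectors = np.array([[10, 5, 0, 8], [240, 255, 248, 250]], dtype=float)
indices = vq_encode(vectors, codebook)
print(indices)  # dark block maps to code 0, bright block to code 1
```

Only the indices are stored or transmitted, so the rate is fixed by the codebook size, while the distortion depends on how well the code vectors cover the image blocks.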

c) Fractal Coding

The crucial idea here is to decompose the image into segments by

using standard image processing techniques, for example color separation, edge

detection, and spectrum and texture analysis. A library of codes

called Iterated Function System (IFS) codes, which are compact

sets of numbers, is used. Using a systematic method, a set of codes

for a given image is determined, such that when the IFS codes are applied in an

appropriate manner to the image blocks, they yield a picture that is a close approximation of

the original. This idea is very effective for compressing images that have good regularity

and self-similarity.

d) Block truncation coding

In this method, the image is divided into non-overlapping blocks of

pixels. For each block, a threshold and reconstruction values are determined; the threshold is

usually the mean of the pixel values in the block. A bitmap of the block is then

derived by replacing all pixels whose values are greater than or equal to

(less than) the threshold by a 1 (0). Then for each segment (group

of 1s and 0s) in the bitmap, the reconstruction value is determined as the mean

of the values of the corresponding pixels in the original block.
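The steps above, threshold at the block mean, form the bitmap, then average each group, can be sketched as follows (illustrative Python on a synthetic block):

```python
import numpy as np

def btc_block(block):
    """Basic block truncation coding of one pixel block."""
    threshold = block.mean()
    bitmap = block >= threshold            # 1 where pixel >= mean, else 0
    high = block[bitmap].mean()            # reconstruction value for the 1s
    low = block[~bitmap].mean() if (~bitmap).any() else high
    return bitmap, low, high

def btc_reconstruct(bitmap, low, high):
    """Rebuild the block from the bitmap and the two reconstruction values."""
    return np.where(bitmap, high, low)

# A block with a dark left half and a bright right half
block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 12, 199, 211],
                  [10, 14, 202, 209]], dtype=float)
bitmap, low, high = btc_block(block)
print(low, high)  # two levels that stand in for all 16 pixels
```

The compressed block is just the bitmap (one bit per pixel) plus the two reconstruction values, so every pixel comes back as either `low` or `high`.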

e) Sub band coding

In this method, the image is analyzed to produce components containing

frequencies in well-defined bands, called sub bands. Then

quantization and coding are applied to each of the bands. The advantage of this

scheme is that the quantization and coding suitable for each sub band can be

designed independently. Compression strategies can be applied either directly to the

images or to the transformed image data (transform domain). Transform coding

methods are appropriate for image compression. Here, the image is decomposed or

transformed into components that are then coded according to their individual attributes. The transform

should have a high energy compaction property, in order to achieve high

compression ratios. Examples: Discrete Cosine Transform (DCT), Wavelet

Transform, Multi-wavelet Transform, etc.
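A one-level 2-D Haar decomposition, the simplest sub-band split, can be sketched as follows (illustrative Python; an assumed stand-in for the wavelet filters discussed later in the thesis):

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar split into LL, LH, HL, HH sub bands."""
    # Average / difference of neighboring columns (low / high pass along rows)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Then the same along rows (low / high pass along columns)
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # coarse approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_decompose(img)
print(ll)  # quarter-size approximation of the image
```

Each sub band can then be quantized and coded on its own: for typical images the detail bands are nearly zero and compress very cheaply, while most bits go to the LL approximation.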

1.3 IMAGING TECHNIQUES

Common medical imaging modalities such as CT (Computed

Tomography), MRI (Magnetic Resonance Imaging) and PET (Positron Emission

Tomography) form the basis of medical image compression techniques, and they are

useful in the fields of medicine, biology, earth science, archaeology, materials

science and nondestructive testing. Anatomical and

morphological imaging techniques like X-ray, CT and MRI, which are widely

used in clinical practice, offer high anatomical resolution but are not capable of imaging

metabolic activity.

1.3.1 Computed Tomography (CT)

Computed Tomography (CT) utilizes a computer system that takes data from several X-ray images of structures inside a patient's body and converts them into images on a monitor. Tomography is the procedure of generating a 2D image slice or section through a 3D object. A CT scanner uses digital geometric processing to create a 3D image of the inside of an object. A CT scanner radiates a series of narrow beams through the patient's body as the scanner moves through an arc, unlike an X-ray machine, which sends just one radiation beam. The final picture is more informative than an X-ray image.

1.3.2 Magnetic Resonance Imaging (MRI)

The basic idea of MRI is to study the response of the magnetized

tissue to Radio Frequency (RF) signals and deduce the underlying properties of


the tissue. An MRI system consists mainly of three hardware components. The main magnet produces a strong static magnetic field (B0), which is used to magnetize the tissue. The higher the magnetic field, the higher the SNR that can potentially be achieved with the scanner. The field must be homogeneous over the imaging volume in order to avoid distortions in the acquisition; additional shim coils are used to guarantee the homogeneity even after the introduction of the patient into the bore. Apart from some open MRI scanners that use permanent magnets, most clinical scanners use a cylindrical superconducting magnet consisting of a solenoid of wire (typically niobium-titanium), which operates in liquid helium at 4 Kelvin in order to retain its superconducting properties and offer no resistance to the current. Consequently, the magnetic field stays on even when the scanner is not being operated. Modern clinical MRI scanners have a main magnet producing a field strength of typically 1.5 or 3 Tesla, although preclinical and research scanners can use 7 Tesla. Earth's magnetic field strength is about 0.00005 Tesla.

Figure 1.1 Schematic view of an MRI scanner

In Figure 1.1, the main magnet (a) produces a strong homogeneous magnetic field, the gradient coils (b) are responsible for the spatial localization of the signal, and the RF transmission coil (c) excites the spins; the signal response from the excited spins within the patient is measured by local surface coils placed on the imaging volume.


1.3.3 Positron Emission Tomography (PET)

Positron Emission Tomography (PET) is a diagnostic imaging instrument that produces images of radioactive substances injected into the patient in order to delineate functional activity. The radioactive nucleus emits a positron, which annihilates with an electron to produce two 511 keV photons traveling in nearly opposite directions, to be detected in coincidence by two detectors. Many photons are absorbed or scattered, reducing the number of detected emission events. PET images can be used directly, or after dynamic modeling, to extract quantitative estimates of a desired physiological, biochemical or pharmacological parameter. Since such images are typically noisy, it is important to understand how noise affects the resulting quantitative measures. A prerequisite for this is that the properties of the noise are known in magnitude (variance) and quality (correlation). Experimental PET data is acquired in Two Dimensional (2D) and Three Dimensional (3D) acquisition modes and reconstructed by the analytical Filtered Back Projection (FBP) and statistical Maximum a Posteriori (MAP) approaches, along with soft computing techniques like Artificial Neural Network with Back Propagation Network (ANN-BPN), Support Vector Machine (SVM) and Artificial Neural Network with Radial Basis Function (ANN-RBF).

One of the major differences between PET scans and other imaging tests like CT or MRI is that the PET scan reveals metabolic changes occurring in an organ or tissue at the cellular level. This is significant and exclusive because disease processes very often begin with functional changes at the cellular level. A PET scan can very often detect these initial changes, whereas a CT or MRI detects changes somewhat later, as the disease starts to cause changes in the structure of organs or tissues.

Compared to CT images, PET images show a lower anatomical

resolution, which can affect the correct localization of lesions and demarcation


of their borders. Moreover, the contrast in CT imaging is based on the density of the tissue, while contrast in PET imaging results from metabolic activity in the tissue. Combining the two, however, requires a decent alignment of the PET and CT image data. In image processing, such alignment is called registration. Image data is organized in a discrete coordinate system containing numerous elements; in two-dimensional (2D) images these elements are called pixels. When acquired with standalone scanners, the coordinate systems of PET and CT image data will almost certainly differ from each other. The reasons for the difference can be manifold, but the greatest influence derives from the change of patient position between the scans. It is practically impossible for a patient to maintain the exact position when transferred from one scanner to the other. To cope with this requirement, the combination of PET and CT scanners into a single device was realized in the early 2000s. In contrast to standalone PET and CT scanners, such combined scanners represent an effective approach for the acquisition of accurately registered images and are able to reduce the overall scan time by up to 40%. The reason for this significant improvement is that the scans are acquired one after the other, thus minimizing the possibility of movement. Moreover, the patient is moved on an automatic bed from one scanner to the other, describing a linear, one-dimensional translation that can be registered very fast and accurately by software algorithms.

1.4 IMAGE COMPRESSION PERFORMANCE METRICS

The performance of a compression technique can be assessed in a number of ways: the amount of compression, the relative complexity of the technique, the memory required for implementation, the time required for compression on a given machine, and the distortion in the reconstructed image. The following performance metrics are used to evaluate the compression techniques.


1. Image quality
2. Compression ratio
3. Speed of compression
   • Computational complexity
   • Memory resources
4. Power consumption

1.4.1 Image quality

There is a need to specify methods to judge image quality after the reconstruction process and to measure the amount of distortion due to the compression process, as minimal image distortion means better quality. There are two types of image quality measures: subjective quality measurements and objective quality measurements. Subjective quality measurement is established by asking human observers to judge and report the image or video quality according to their experience; these measures may be relative or absolute. Absolute measures classify image quality not with reference to any other image but according to criteria such as those of the Television Allocations Study Organization. On the other hand, relative measures compare one image against another and choose the better one. The quantitative measurements are discussed in the following.

1.4.1.1 Distortion

The variation between the original and reconstructed image is called distortion. It is measured using the Mean Square Error (MSE):

MSE = \frac{1}{n \times n} \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \left[ I(i,j) - K(i,j) \right]^{2}    (1.2)

where I is the n×n (n rows, n columns) noise-free original image (input image) and K is its noisy approximation (output image).

1.4.1.2 Fidelity or Quality

It defines the similarity between the original and reconstructed images. It can be measured using the Peak Signal to Noise Ratio (PSNR), in dB:

PSNR = 10 \log_{10} \left( \frac{MAX_{I}^{2}}{MSE} \right)    (1.3)

Here MAX_I represents the maximum possible pixel value of the image; for pixels represented with eight bits per sample, MAX_I = 255. Logically, a greater value of PSNR is better because it indicates that the ratio of signal to noise is higher. Here, the 'signal' is the original image and the 'noise' is the error due to reconstruction.
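Equations (1.2) and (1.3) can be computed directly, as in the following Python sketch (an illustrative implementation, not code from the thesis; NumPy and the function names are assumptions):

```python
import numpy as np

def mse(original, reconstructed):
    # Mean square error per Eq. (1.2): average squared pixel difference
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, max_i=255.0):
    # Peak signal-to-noise ratio in dB per Eq. (1.3), for 8-bit pixels
    e = mse(original, reconstructed)
    if e == 0:
        return float('inf')   # identical images: no distortion
    return 10.0 * np.log10(max_i ** 2 / e)

original = np.zeros((4, 4))
reconstructed = np.full((4, 4), 5.0)   # constant error of 5 gray levels
error = mse(original, reconstructed)   # 25.0
quality = psnr(original, reconstructed)
```

A constant pixel error of 5 gray levels gives an MSE of 25 and a PSNR of about 34 dB for 8-bit images.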

1.4.1.3 Compression Ratio (CR)

It is the ratio of the number of bits required to represent the image prior to compression to the number of bits required to represent the image after compression:

Compression Ratio (CR) = \frac{\text{Uncompressed file size}}{\text{Compressed file size}}    (1.4)

CR can be used to judge the compression efficiency.

1.4.1.4 Bits per pixel (Bpp)

It is the average number of bits required to represent a single sample.

It is represented in terms of Bits per pixel (Bpp).

Page 34: A HYBRID APPROACH FOR EFFICIENT COMPRESSION OF …shodhganga.inflibnet.ac.in/bitstream/10603/121385/1/perumal phd... · This is to certify that all corrections and suggestions pointed

13

Bits per pixel (Bpp) =

N x N

bytes ofNumber x 8pixels ofNumber bits ofNumber (1.5)

Where N-stands for rows, N- stands for columns.
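Equations (1.4) and (1.5) are simple ratios, sketched below in Python (illustrative only; the function names and the 512×512 example are assumptions, not thesis code):

```python
def compression_ratio(uncompressed_bytes, compressed_bytes):
    # Eq. (1.4): original size over compressed size
    return uncompressed_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes, rows, cols):
    # Eq. (1.5): average bits spent per pixel in the compressed file
    return 8.0 * compressed_bytes / (rows * cols)

# e.g. a 512x512 8-bit image (262144 bytes) compressed to 32768 bytes
cr  = compression_ratio(512 * 512, 32768)   # 8.0
bpp = bits_per_pixel(32768, 512, 512)       # 1.0
```

Note that under this definition the two metrics are linked: for an 8-bit image, Bpp = 8 / CR.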

1.4.1.5 Speed of Compression

Compression speed depends on the compression technique that has been used, as well as the nature of the platform that hosts the compression process. Compression speed is influenced by the computational complexity and the size of memory. Lossy compression is a complex process that increases system complexity and storage requirements and needs more computation cycles.

1.5 THE COMPRESSION SYSTEM

The compression system model consists of two parts:

• Compression

• De-compression

Figure 1.2 General block diagram of Compression Technique

Figure 1.3 General block diagram of De-compression Technique

The compressor shown in Figure 1.2 consists of a preprocessing stage that performs data reduction and mapping. The encoding stage performs quantization and coding, whereas the de-compression consists of a decoding stage that performs decoding and inverse mapping, followed by a post-processing stage, as shown in Figure 1.3. In compression, prior to the encoding process, preprocessing is performed to prepare the image for encoding and consists of operations that are application specific. Post-processing can be performed to remove some of the potentially unwanted artifacts introduced by the compression process, after the compressed file has been decoded.

The compression process can be divided into following stages:

• Image Data reduction: Image data can be reduced by gray level and

spatial quantization, and can undergo any desired image improvement

(for example, noise removal) process.

• Mapping: Involves mapping the original image data into another mathematical space, where it is easier to compress the data.

• Quantization: Involves taking potentially continuous data from the

mapping stage and putting it in discrete form.

• Coding: Involves mapping the quantized data (discrete) onto a code in

an optimal manner.

The mapping procedure is significant because the image data are highly correlated. If the value of one pixel is known, it is likely that the adjacent pixel value is similar. By finding a mapping equation that de-correlates the data, this type of data redundancy can be removed.

• Differential coding: Reduces data redundancy by finding the difference between adjacent pixels and encoding those values.

• Principal components transform: This provides a theoretically

optimal decorrelation.
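The differential coding idea above can be sketched in a few lines of Python (an illustrative sketch, not thesis code; NumPy and the function names are assumptions):

```python
import numpy as np

def differential_encode(row):
    # keep the first pixel, then store only adjacent differences,
    # which are small and cluster near zero for correlated data
    row = np.asarray(row, dtype=int)
    return np.concatenate(([row[0]], np.diff(row)))

def differential_decode(codes):
    # cumulative sum restores the original row exactly (lossless)
    return np.cumsum(codes)

row = [100, 101, 101, 103, 102]
codes = differential_encode(row)       # [100, 1, 0, 2, -1]
restored = differential_decode(codes)
```

The small difference values can then be entropy coded with far fewer bits than the raw pixel values.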


As the spectral domain can also be used for image compression, the first stage may include mapping into the frequency or sequency domain, where the energy in the image is compacted mainly into the lower frequency components.

• Quantization may be essential to convert the data into digital form (bit data type), depending on the mapping equation used. This is because many of these mapping methods result in floating point data, which requires multiple bytes for representation and is not very efficient when the goal is to reduce the data.

Decompression process can be divided into the following stages:

• Decoding: Takes the compressed file and reverses the original coding by mapping the codes back to the original quantized values.

• Inverse mapping: Involves reversing the original mapping process.

• Post-processing: Involves enhancing the final image. De-compression might also reverse any preprocessing, for example, enlarging an image that was shrunk in the data reduction process. In other cases, post-processing may be used simply to enrich the image and to reduce any artifacts from the compression process itself. The development of a compression algorithm is highly application specific. The preprocessing stage of compression consists of processes such as enhancement, noise removal or quantization. The goal of preprocessing is to prepare the image for the encoding process by rejecting any inappropriate information. For example, many images that are used only for viewing purposes can be preprocessed by eliminating the lower bit planes without losing any useful information.


1.6 OVERVIEW OF METHODOLOGY

The methodology for the compression process, which takes an image of size N×N as input, is shown below.

Figure 1.4 The compression process on forward transform

Figure 1.4 represents a compression process flow for an input image. The compression process pre-analyzes rows and columns and performs encoding techniques such as magnitude set and bit plane coding, followed by run length encoding. The sign data of the coefficients is coded as a bit plane with a zero threshold. This bit plane may be coded as it is, to scale back the Bits per pixel (Bpp). The coefficients are coded by means of run length and magnitude set coding techniques, which in turn result in a low bit count.
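As an illustration of the run length encoding step mentioned above, the following Python sketch (not from the thesis; function names and sample data are hypothetical) encodes a bit plane row as (value, run) pairs:

```python
def run_length_encode(bits):
    """Collapse each run of identical bits into a (value, run_length) pair."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [(v, n) for v, n in runs]

def run_length_decode(runs):
    """Expand (value, run_length) pairs back into the original bit row."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

bits = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
runs = run_length_encode(bits)   # [(0, 3), (1, 2), (0, 1), (1, 4)]
```

Long runs of identical bits, common in sign and bit plane data, compress well under this scheme.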

1.7 RESEARCH MOTIVATION

Enormous quantities of data are involved in the process of storage

and/or transmission of images, videos, sound and text in several applications.


The application areas include medical imaging, teleradiology, satellite/space imaging, multimedia digital video (entertainment, home use) and digital photography. In particular, compression becomes very essential in medical imaging.

MRI is a noninvasive method for producing 3D tomographic images of the human body. It is frequently used for the detection of tumors, lesions and other irregularities in soft tissues, such as the brain. Clinically, radiologists qualitatively analyze the brain surface produced by MRI scanners.

Recently, computer-aided techniques for analyzing and visualizing

MR images have been examined. Many researchers have focused on detecting

and quantifying irregularities in the brain. Automatically recognizing the

pathologies in MR images of the head is a vital step in this process. One more

important step in computer-aided analysis is data quality assurance. MR images contain unwanted intensity variations due to imperfections in MRI scanners. Reducing these variations can improve the accuracy of automated analysis.

A single clinical MRI scan occupies numerous megabytes of disk

space. Effective image compression systems are significant for storing

multitudes of scans. In teleradiology, where MRI scans are transmitted to remote sites for assessment by specialists, MR image compression plays a major role in raising transmission speeds.

In the MRI scans of the head, doctors are typically more interested in

the brain as opposed to the region outside it. For this reason, Anderson has developed a lossy MRI compression scheme that selectively compresses the region outside the brain at a higher compression ratio than the brain itself. Thus, a high compression ratio has been achieved while upholding the image quality of the brain area. Obviously, automatic intracranial boundary detection is a prerequisite for such a scheme.


1.8 PROBLEM STATEMENT

1. Medical image data like CT, MRI and PET consume large amounts of storage and bandwidth for transmission, which frequently results in degradation of image quality.

2. Considering both lossy and lossless types of medical image compression, the medical images need to be compressed efficiently with an optimal compression ratio.

3. Image compression is the foremost component of communication and storage systems, where uncompressed images need a considerable compression technique capable of reducing the crippling disadvantages of data transmission and image storage.

4. Compression is often performed on the whole image without considering the region of diagnostic importance.

5. Existing compression techniques do not guarantee substantial perceptible image quality at an optimum bit rate.

6. The suitability of existing compression techniques for telemedicine remains unexplored.

1.9 OBJECTIVES OF THE RESEARCH

The main objectives of this research work are

• To improve the compression ratio for medical images using DCT, SPIHT and Fractal algorithms and to analyze their performance.


• To present an improved image compression algorithm for medical

images using DWT, BPNN and NNRBF.

• To analyze different modality images using Hybrid technique namely

DWT-BPNN.

• To develop Hybrid approach using Fractal and Neural Network Radial

Basis Function for efficient compression of multi modal medical

images.

1.10 CONTRIBUTIONS OF THE THESIS

In this thesis, medical image compression has been carried out using various methods. Compression algorithms such as DCT, DWT, SPIHT, Fractal, LZW, NNBP, NNRBF, hybrid DWT-NNBP and the hybrid FNNRBF approach have been applied to multimodal medical images (PET, MRI and CT). Quality parameters such as CR, PSNR, execution time and memory usage are considered for performance analysis. It is observed that the FNNRBF method has low CR and high PSNR values. The hybrid Fractal and NNRBF approach is found to be more efficient than the individual Fractal and NNRBF methods.

1.11 ORGANIZATION OF THE THESIS

Chapter 1 : This chapter discusses the basics of PET images and image

compression techniques. The problem statement, objectives, contribution and the

scope of research are presented in this chapter. The organization of thesis is also

presented in this chapter.

Chapter 2: This chapter describes literature review of the existing techniques

such as Huffman, LZW, DCT, DWT, SPIHT and Fractal based compression

techniques, NNBP and NNRBF algorithms for PET, MRI and CT image

compression.


Chapter 3: In this chapter, a brief discussion of the Huffman, LZW, DCT, DWT, SPIHT, Fractal, NNBP and NNRBF based compression techniques is presented.

Chapter 4: This chapter discusses the various compression techniques like

DCT, Fractal Compression and SPIHT applied to numerous medical images.

Experimental results show that the outlined DCT approach achieves better CR, Bpp and PSNR with lower MSE in comparison with the SPIHT and Fractal methodologies.

Chapter 5: This chapter describes the different compression methods such as

Fractal, NNBP and NNRBF applied to various medical images such as MR, CT

and PET. Experimental results show that the NNRBF technique achieves a low

CR and higher PSNR, with less MSE on MR, CT and PET images, when

compared to Fractal and NNBP techniques.

Chapter 6: This chapter compares a few promising compression techniques

such as DWT algorithm, NNBP and new hybrid techniques for compression.

DWT improves the quality of compressed image. Back-propagation algorithm

can be extensively used as a learning algorithm in ANN. BPNN comes under

Feed-Forward Neural Network Architecture. Error correction learning rule is

particularly used in Neural Network (NN). This is a very efficient algorithm for

image compression, which works with the architecture of ANN. Then, the

performance analysis of different images is carried out (on application of three

different algorithms). The results clearly show that hybrid image compression using the hybrid DWT-BP (Discrete Wavelet Transform with Back Propagation) technique provides better CR and PSNR.

Chapter 7: This chapter explains the design methodology of Fractal based

Neural Network Radial Basis Function (FNNRBF) for image compression.

Generally, storing digital images requires a large amount of data, which consumes considerable time for transmission and storage. The image compression technique of this chapter is therefore used to overcome the storage and transmission costs. The implementation of this technique demonstrates its effectiveness in compressing medical images. Also, a comparative analysis is performed, showing that the proposed system is effective in terms of CR, PSNR, memory space and execution time.


CHAPTER 2

LITERATURE SURVEY

2.1 INTRODUCTION

Compression means making the file size smaller by rearranging the information in the file. Compressing imagery is different from zipping files: image compression changes the organization and content of the information within a file. Image compression may rearrange the data and degrade it to achieve a desired compression level, depending on the compression ratio. The sacrifice of data may or may not be noticeable. The amount of image compression can be influenced by the type of imagery; a higher compression ratio can be achieved in portions of the image that have a similar tone, such as a water area of uniform shade. Image compression is one of the most active disciplines in image processing. Acquired images need to be stored or transmitted over long distances. A raw image occupies more memory and hence needs to be compressed. Due to the demand for high quality video on mobile platforms, there is a need to compress raw images and reproduce them without any degradation. Lossy compression is the reverse of lossless image compression; it is used to compress images and video files. Lossless compression is a family of data compression that permits the original data to be perfectly reconstructed from the compressed data. Redundant information is removed during compression and restored during decompression. Almost all lossless compression programs do two things in sequence: the first step generates a statistical model for the input information, and the second step uses this model to map input information to bit sequences in such a way that "probable" (i.e. frequently encountered) information produces shorter output than "improbable" information.
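As a minimal sketch of this two-step idea (build a statistical model, then map symbols to bit sequences), the following Python example builds a Huffman prefix code from symbol frequencies; the function name and sample data are illustrative, not from the thesis:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Step 1: count symbol frequencies (the statistical model).
    Step 2: repeatedly merge the two least frequent subtrees so that
    frequent symbols end up with shorter bit strings."""
    freq = Counter(data)
    if len(freq) == 1:                    # degenerate single-symbol input
        return {next(iter(freq)): '0'}
    # heap entries: [weight, unique tie-breaker, {symbol: code-so-far}]
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in lo[2].items()}
        merged.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

codes = huffman_codes("aaaabbc")   # 'a' is most frequent
```

For the input "aaaabbc", 'a' receives a 1-bit code while 'b' and 'c' receive 2-bit codes, so the encoded message needs only 10 bits instead of the 14 a fixed 2-bit code would use.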


2.1.1 Description of the Image Compression block diagram

An image consists of a large amount of data and requires more space in memory. If a large amount of data has to be transmitted, it takes much time to deliver it to the receiver. Thus, by using image compression techniques, the time consumption can be greatly reduced. In this way, the elimination of redundant data in an image becomes possible. A compressed image occupies less memory space and takes less time to transmit the information from transmitter to receiver. Compression means making the file size smaller by reorganizing the data in the file. Data that is duplicated or has no value is saved in a shorter format or eliminated, greatly reducing the file size. Compressing imagery is different from zipping files. Image compression reorganizes the data and may degrade it to achieve a desired compression level, depending on the compression ratio. The better the compression ratio, the smaller the file size, as more data is packed into a smaller space, but the lower the quality of the compressed product.

Figure 2.1 Block Diagram of Image Compression

Figure 2.1 shows the block diagram of image compression. First, the input image is transformed using a forward transform; the resulting coefficients are then quantized; entropy encoding follows; and finally the compressed image is obtained, which can be stored or transmitted. These are the steps followed in compression techniques. Image compression techniques are divided into two types, namely lossy and lossless. In lossy compression, some information is lost during compression of the image, whereas in lossless compression no information is lost

(Figure 2.1 blocks: Input Image → Forward Transform → Quantization → Entropy Encoding → Compressed Image)


during image compression. The Discrete Wavelet Transform (DWT) and Fractal algorithms come under lossy compression; Huffman coding comes under lossless image compression.

2.1.2 Image format

In general, the JPEG format of CT, MRI and PET medical images is used to analyze the quality parameters during image compression.

In the JPEG format, the degree of compression can be adjusted, allowing a selectable trade-off between storage size and image quality. This image format is best suited to photographs of realistic scenes. JPEG is a more regularly used method of lossy compression for digital images than the Bitmap Image File (BMP) and Tagged Image File Format (TIFF) formats.

2.2 LITERATURE REVIEW

It is of utmost importance to discuss the basics of multimodal medical image compression, so that the research community can have a better idea about the processing of CT, MRI and PET image compression.

Jian-Jiun Ding et al [36] have presented a variable length code, the Huffman code, which is mostly used to increase coding efficiency. The Huffman source-coding algorithm is widely used to generate a uniquely decodable code with minimum expected codeword length when the probability distribution of the data source is known to the encoder.

Pawel Turcza et al [55] have proposed image compression using

Huffman coding which is based on an integer version of a Discrete Cosine

Transform and a low complexity entropy encoder making use of an adaptive

Golomb–Rice algorithm, which can be efficiently used in Huffman tables.


Ankita Vaish et al [13] have used PCA and Huffman coding. A set of principal components (PCs) is used for reconstruction. The ill effects of using a limited number of PCs for reconstruction are overcome by further quantization, and coding redundancy is removed by Huffman coding.

Arif Sameh Arif et al [15] have introduced a new framework based on grouping of images and the correlation of pixels. A combination of run-length coding and Huffman coding is used, and a significant improvement in compression is achieved.

Jagadish et al [33] have explained that the objective of image

compression technique is to reduce the amount of data required for representing

sampled digital images. It is concluded that Huffman coding is the most efficient

technique for image compression and decompression.

Xiaofeng Li et al [80] have formulated a two-stage lossless compression scheme for medical images. First, the current pixel is predicted from least-squares-based prediction coefficients. Second, the residual image is encoded by Huffman coding.

Mohamed Abo-Zahhad et al [7] have proposed an image compression approach combining DPCM, DWT and Huffman coding. First, the image is pre-processed by DPCM; its output then undergoes wavelet transformation, and the resulting coefficients are encoded using Huffman coding.

Tajallipour et al [72] have presented an efficient adaptive LZW data compression algorithm. Encoding uses a customized library along with a custom-valued threshold; both the library size and the threshold parameters are adjustable.

Ng et al [50] have proposed an effective data re-ordering method, the SBI technique, which handles the pre-processing stage in LZW algorithms. With SBI, the dictionary matches grow dramatically, leading to improved performance.

Chiang et al [24] have developed an adaptive lossy LZW algorithm

that employs an adaptive mechanism for threshold.

Patil et al [54] have suggested automated multiclass diagnosis of dementia from MR images of the human brain. A 1D histogram derived from the 2D brain MR images is compressed using DCT, and a set of DCT coefficients is considered as features for classification by an ANN. These features help in identifying whether a person suffers from Huntington's disease, mild Alzheimer's or Alzheimer's disease. A classification rate of 100% is obtained.

Christophe et al [12] have presented a scheme that involves the DCT, Kohonen-map-based vector quantization, first-order-predictor-based differential coding and entropy coding.

Debin Zhao, Wen Gao et al [28] have proposed a block-based DCT that enhances DCT applications in compression, retrieval and pattern recognition of images. A Morphological Representation of DCT Coefficients (MRDCT) is utilized.

Ci Wang et al [14] have suggested a new DCT-based MPEG-2 (Moving Picture Experts Group) transparent scrambling algorithm using INTRA blocks. The computational burden is very low, the effects can be easily controlled by the operator, and the algorithm has little influence on the output bit rate.

Jie Liang et al [37] have proposed a structure for Linear-Phase Paraunitary Filter Banks (LPPUFB) for time- as well as frequency-domain post-processing of the DCT. This structure enables the design of DCT-based LPPUFBs with partial-block overlapping and variable-length filters. A DCT-oriented initialization method is developed for improved convergence.


Renato et al [59] have introduced an approximate transform whose matrix rows are constructed by exploiting diverse mathematical structures, for the design of hybrid algorithms.

Merav Huber-Lerner et al [46] have considered Hyper-Spectral (HS) image sensors, which record the reflectance of each pixel and create a 3-D representation of the recorded scene. HS images, however, require large storage space and enormous transmission time. The PCA-DCT system (Principal Component Analysis followed by the Discrete Cosine Transform) exploits the PCA's ability to separate the unwanted background using only a small number of components.

Yung-Gi Wu et al [84] have proposed an adaptive sampling algorithm in which significant coefficients are identified by the difference between exact points and projected points. Recording or transmitting only the important coefficients attains the target compression. In the decoder, a linear equation is employed to reconstruct the coefficients.

Yung-Gi Wu et al [85] have presented a strategy that treats the DCT as a band-pass filter to decompose a sub-block into a number of equally sized bands. The high similarity among the bands is exploited, and the compression bit rate is thereby greatly reduced.

Vasanthi Kumari et al [76] have presented an image compression system based on graph cuts and the wavelet transform. Initially, a block-partition procedure is carried out, and then dissimilar blocks are selected by applying the graph-cut algorithm. Differential Pulse Code Modulation (DPCM) is utilized to raise the compressibility. Finally, the transformed image is given to a Huffman encoder.

Alagendran et al [5] have investigated several types of medical image compression techniques.


Bhammar et al [19] have reviewed several approaches to image compression. The fast transmission of high-quality digital images necessitates a compression-decompression technique that is simple, yields high-quality images and is completely lossless.

Reny Catherin et al [60] have surveyed various lossy image compression techniques. From their survey, the crucial inferences are: DWT, SPIHT and DCT provide a higher compression ratio and worthy output images. Although performance is disturbed by noise, soft-computing-based compression techniques work well in noisy environments and offer high CR and PSNR. NN procedures give better-quality reconstructed images, as they avoid the blocking effects associated with the DCT.

Sridevi et al [69] have compared compression methods such as JPEG2000 Max-Shift ROI coding and related standards such as JPEG2000 scaling-based ROI coding, DCT, the shape-adaptive wavelet transform with scaling-based ROI, and DWT with sub-band block hierarchical partitioning. These methods are evaluated in terms of CR and compression quality.

Vaclav Simek et al [75] have illustrated the acceleration of the 2-D wavelet transform for medical image compression using MATLAB and the Compute Unified Device Architecture (CUDA). The accelerated processing flow exploits the immense parallel computational power offered by modern NVIDIA GPUs. The computing system is programmed in C with CUDA; similarly, a number of attractive features are exploited for a wide class of intensive data-parallel computation tasks.

Praisline Jasmi et al [56] have observed the similarity of image compression techniques with different coding schemes: Huffman coding, DWT and fractal coding. Compressed images are obtained by utilizing these three


algorithms on different types of input image. The compressed images are evaluated against one another using various performance parameters of these compression techniques.

Jaffar Iqbal Barbhuiya et al [32] have made a comparative study of image compression using the DCT and the DWT. The DWT algorithm outperforms DCT algorithms in terms of compression, MSE and PSNR.

Kai-jen Cheng et al [40] have proposed a Binary Embedded Zerotree Wavelet (BEZW) algorithm built on two newly defined tree structures. In the BEZW algorithm the bit streams are ordered by their importance, so that the reconstruction fidelity depends on the set of recovered bit planes. In the competitive 3-D BEZW algorithm, the dominant pass performs the quantization and significance tests.

Priya Pareek et al [58] have recommended RLE as a suitable method for compressing any form of data irrespective of its information content. Alternative RLE schemes can encode the data along the rows of the bitmap, along its columns, as 2-D tiles, or along the diagonals in a zigzag order.
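The basic row-wise RLE that these variants build on can be sketched as follows (an illustrative sketch, not any specific author's scheme):

```python
def rle_encode(pixels):
    """Run-length encode a 1-D pixel sequence as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the pixel sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]
runs = rle_encode(row)                # [(0, 3), (255, 2), (0, 4)]
```

The alternative scan orders mentioned above (columns, tiles, zigzag diagonals) only change the order in which pixels are fed to `rle_encode`, aiming to lengthen the runs.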

Kuppusamy et al [43] have suggested that fractal image compression is a quite useful technique, applicable mainly to the compression of medical and color images. Fractal compression is a lossy compression method for digital images. The main idea is to decompose the image into segments using standard image processing techniques such as color separation, edge detection, and spectrum and texture analysis.

Sophin Seeli et al [68] have affirmed that fractal image compression is based on the fractals of various images. The two main advantages of converting images to fractal data are: 1) the memory size of the compressed image is much lower than that of the original image; 2) fractal image compression


can be evaluated using parameters such as Compression Ratio (CR), Peak Signal-to-Noise Ratio (PSNR), bits per pixel (bpp) and many others.

Taha Mohammed Hasan et al [71] have designed an Adaptive Fractal Image Compression (AFIC) algorithm to reduce the long processing time of Fractal Image Compression (FIC). AFIC speeds up the encoding process and achieves a higher compression ratio, with a slight reduction in the quality of the reconstructed image. Compared with other methods, AFIC spends much less encoding time and offers a higher compression ratio for comparable reconstructed image quality.

Al-Fahoum et al [7] have implemented a combination of fractal coding and the wavelet transform for image compression, applied mainly to X-ray angiograms. First, the image is decomposed using the wavelet transform. The smooth, low-frequency part of the image appears as an approximation image with higher self-similarity; it is therefore coded using a fractal coding technique.

Anil Bhagat et al [11] have found that fractal image compression can take advantage of redundancy in scale, but its operating principles are very different from those of other transform coders. Images are not stored as a set of quantized transform coefficients, but instead as fixed points of maps on the plane. Just as a fern has detail at every scale, the decoded image has no natural size and can be decoded at any size.
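This fixed-point view can be demonstrated with a toy 1-D "fractal code" (purely illustrative: here the contractive affine maps are given directly, whereas real fractal coding must search for them):

```python
def fractal_decode(code, length, n_iter=40):
    """Iterate the affine maps from an arbitrary starting signal.

    Each entry of `code` is (domain_index, scale, offset) with |scale| < 1,
    so the iteration is a contraction and converges to the unique fixed
    point regardless of the starting signal -- the "decoded image".
    """
    signal = [0.0] * length
    for _ in range(n_iter):
        signal = [s * signal[d] + o for (d, s, o) in code]
    return signal

# Each cell copies from a shifted domain cell, scaled by 0.5 with offset 10.
# The fixed point of x = 0.5*x + 10 is x = 20 everywhere.
code = [((i + 3) % 8, 0.5, 10.0) for i in range(8)]
decoded = fractal_decode(code, 8)
```

Starting from all zeros (or any other signal) the iteration converges to the same fixed point, which is exactly why the decoded image "has no natural size": the same maps can be iterated on a grid of any resolution.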

Jianji Wang et al [35] have introduced an FIC scheme based on the observation that the affine similarity between two blocks in FIC is closely related to the Absolute value of Pearson's Correlation Coefficient (APCC). Each block is first categorized by an APCC-based block classification process; the domain blocks are then sorted by their APCCs.


Jyh-Horng Jeng et al [39] have proposed Huber Fractal Image Compression (HFIC), in which a linear Huber regression embeds robust statistics in the fractal encoding technique, so that the FIC scheme remains insensitive when the raw image is corrupted by noise. Since HFIC increases the computational cost, a Particle Swarm Optimization (PSO) technique is employed to reduce the search time.

Chaudhari et al [23] have suggested wavelet-transform-based fast fractal image coding. A Fast Fourier Transform (FFT) approach to fractal image coding with flexible quadtree partitioning is applied. The resemblances among wavelet subtrees are utilized to predict the finer-scale coefficients from the coarser scale by means of affine transformations.

Sridharan Bhavani et al [70] have discussed fractal-based coding processes such as standard fractal coding, quasi-lossless fractal coding and improved quasi-lossless fractal coding. A machine-learning-based model is utilized for reducing the encoding time.

Omar Arif et al [52] have elaborated on the use of Mercer kernel methods in statistical learning theory, which provide strong learning capabilities, as seen in Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM). The technique takes advantage of the universal approximation properties of generalized radial basis function neural networks to approximate the empirical kernel map associated with KPCA or SVM.

Long Zhang et al [45] have introduced a discrete-continuous procedure for the construction of an RBF model. Initially, Orthogonal Least Squares (OLS) performs forward stepwise selection; this is followed by Levenberg-Marquardt (LM) parameter optimization of the connections between the hidden nodes and the output weights to speed up convergence.


Arunpriya et al [17] have proposed an approach consisting of three stages: preprocessing, feature extraction and classification. Tea-leaf shape images are recognized precisely in the preprocessing stage with the help of fuzzy de-noising using the dual-tree discrete wavelet transform, and digital morphological features are used to increase the classification accuracy. A Radial Basis Function (RBF) network is then used for efficient classification.

Tiruvenkadam Santhanam et al [74] have developed a neural network for weather forecasting based on issues raised by weather professionals. The Radial Basis Function (RBF) network is evaluated against the Back Propagation Network (BPN) to test the effectiveness of the different forecasting techniques. The results show that the radial basis function outperforms back propagation.

Arun Vikas Singh et al [16] have combined the wavelet transform and a Radial Basis Function Neural Network together with vector quantization. In this approach the image is decomposed into a set of sub-bands, and diverse coding and quantization practices are applied to the different sub-bands.

Alex Alexandridis et al [6] have presented a novel approach for training Radial Basis Function (RBF) networks, which increases the accuracy and parsimony of the resulting model. The procedure is based on a non-symmetric variant of the Fuzzy Means (FM) process.

Panda et al [53] have examined neural-network-based image compression with the Back Propagation algorithm, which is widely used for compressing images.

Abdul Khader Jilani Saudagar et al [1] have addressed Medical Image Compression (MIC), a basic but important element of telemedicine. An algorithm is required for the compression of medical imaging modalities


such as CT, MRI and ultrasound, to provide medical care to patients in remote locations. In telemedicine, the transmission of medical images requires a high data rate so as to obtain good-quality transmission.

Birendra Kumar Patel et al [20] have developed an image compression technique based on a back-propagation neural network with the Levenberg-Marquardt algorithm. The training algorithm and the back-propagation network are used to improve the performance, reduce the convergence time and provide a high compression ratio with low distortion.

Anna Durai et al [14] have recommended the Back Propagation algorithm. Its efficiency decreases if the input image contains a number of distinct gray levels with little variation between neighbouring pixels, in which case it takes an enormous time to converge.

Prema Karthikeyan et al [57] have performed image compression using the back-propagation algorithm in a multi-layer neural network. A network with three layers (input, hidden and output) is used, in which the input and output layers have the same number of neurons. Compression is achieved through the values of the neurons at the smaller hidden layer.
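The bottleneck idea behind such networks can be sketched with an untrained toy model (random weights and illustrative dimensions only; a real coder would train the weights with back propagation):

```python
import random

random.seed(0)
n_in, n_hidden = 64, 16   # an 8x8 pixel block squeezed to 16 hidden units (4:1)

W1 = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]
W2 = [[random.uniform(-0.1, 0.1) for _ in range(n_hidden)] for _ in range(n_in)]

def compress(block):
    # Encoder: the 16 hidden activations are what gets stored or transmitted.
    return [sum(w * x for w, x in zip(row, block)) for row in W1]

def decompress(hidden):
    # Decoder: reconstruct all 64 pixel values from the hidden activations.
    return [sum(w * h for w, h in zip(row, hidden)) for row in W2]

block = [random.random() for _ in range(n_in)]
hidden = compress(block)
recon = decompress(hidden)
```

Only the hidden-layer values (16 numbers instead of 64) need to be stored, which is where the compression comes from; training adjusts W1 and W2 so that `recon` approximates `block`.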

Shiqiang Yan et al [66] have selected a neural network built from chaotic neurons; a Chebyshev chaotic map is utilized to construct the network.

Vilas et al [79] have preferred an ANN with a feed-forward back-propagation system designed for image compression. A bipolar coding system together with the LM algorithm helps to obtain satisfactory results.

Dipta Pratim Dutta et al [29] have utilized an ANN that adapts to psycho-visual features; the concept depends mostly on the information


contained in the images. The algorithms preserve most of the characteristics of the data in a lossy manner while maximizing the compression performance.

Vilas Gaidhane et al [78] have used a feed-forward BPNN method along with the PCA technique, trained with different numbers of hidden neurons.

Anjana Jianyu Lin et al [54] have evaluated the performance of two-band analysis-synthesis filters used mainly for compressing raw images. Both FIR and IIR filter designs have been considered. Hybrid FIR-IIR analysis-synthesis filters are found to maximize the compression performance.

Chakrapani et al [21] have presented an iterated function system (IFS) consisting of a collection of affine transformations. FIC employs a distinct kind of IFS called PIFS (also called a local IFS). The Collage Theorem is applied to PIFS and gray-scale images, which is the analogue of the IFS formulation for binary images.

Alok Kumar Singh et al [9] have noted that image compression is the most popular way to reduce the redundant and irrelevant information within an image, so that it can be stored and transferred efficiently.

Chander Mukhi et al [22] have utilized encoders that use the DCT to perform transform coding. The DCT maps time-domain signals to the frequency domain and compresses the frequency-domain spectrum by truncating low-intensity regions. The Discrete Wavelet Transform (DWT), which in essence offers a more robust solution, may be computed using a collection of digital filters at a faster rate by analyzing the complete signal. The DWT captures more information than the DCT and produces better results. The DWT separates the high-frequency components of the images from the remaining components


and resizes and rearranges the remaining components to make a new transformed image.

Jiaji Wu et al [34] have proposed a two-stage lossless compression algorithm based on weighted motion compensation and context-based modeling. The algorithm makes use of weighted motion compensation for obtaining the motion vector corresponding to the irregular motion in aurora images. Afterwards, the context-based modeling is combined with the motion vector, and the results obtained demonstrate the superiority of the algorithm.

Yongjian Nian et al [83] have proposed a lossless compression algorithm for hyperspectral images based on distributed source coding. The algorithm processes blocks with the same location and size in every band; the importance varies from block to block along the spectral direction. The algorithm weights the energy of every block under the introduced target-rate constraints. Additionally, a linear prediction model is employed to construct the side information of every block for Slepian-Wolf coding.

Yeo et al [81] have proposed a feed-forward NN trained with the back-propagation algorithm to compress grayscale medical images. A three-hidden-layer Feed-Forward Network (FFN) is utilized directly. Once trained with a sufficient variety of sample images, the compression method is tested on the target image. Compression is then achieved because the number of hidden neurons is smaller than the number of image pixels.

Ajay Kumar Bhagat et al [4] have developed a hybrid methodology using SVD and DWT. This is a suitable way to update the decomposition as well as the basis images. The DWT is employed to divide the image into sub-bands; because the edges lie in the LH, HL and HH sub-bands, the impact of fusion is minimized.


Ferni Ukrit et al [30] have developed a compression technique that adds super-spatial structure prediction with motion estimation and motion compensation to obtain a higher compression ratio. This is implemented by a simple block-matching method, binary tree search.

Shruti Puniani et al [67] have discussed some basic compression techniques such as Huffman coding, LZW coding and VQ compression.

Sridhar et al [62] have developed an image compression method that makes use of both wavelet transforms and neural networks. They also discussed how the coefficients in the low-frequency bands are compressed using Differential Pulse Code Modulation (DPCM).

Abirami et al [2] have evaluated the performance of wavelet-based Support Vector Machines with different combinations of wavelets and kernel functions. SVM regression is applied to the wavelet coefficients to approximate them, and higher compression is achieved by removing the redundancy.

Yongfei Zhang et al [82] have utilized quantization as a core component of wavelet-transform-based lossy image compression, which successfully minimizes the visual redundancy.

Saravanan et al [63] have developed a compression technique that obtains a higher compression ratio by reducing the number of source symbols. The number of source symbols is reduced by combining symbols into a single reduced symbol; Huffman codes are then generated for the reduced symbols.

Nikita Bansal et al [51] have developed a hybrid image compression model that uses both the DCT and the DWT. A high energy-compaction property is present in the DCT,


which usually utilizes fewer computational resources, while the DWT provides a multi-resolution transformation.

Chunlei Jiang et al [26] have proposed a hybrid compression technique that makes use of both the fractal concept and SPIHT (Set Partitioning In Hierarchical Trees). It utilizes the overall landscape characteristics as well as the human visual characteristics. The image is divided into low- and high-frequency sub-bands, after which the low-frequency sub-band is coded using the fractal technique.

Ali Al-Fayadh et al [73] have presented a hybrid lossy compression technique that makes use of classified vector quantization and singular value decomposition; the methodology is termed hybrid classified vector quantization. It involves a better classifier technique based on the gradient in the spatial domain, and it utilizes the AC coefficients among the DCT coefficients. It evaluates the orientation of a block without using any form of threshold, which leads to high-fidelity compressed medical images. Singular value decomposition is used to generate the classified codebooks.

Mohamed El Zorkany et al [48] have developed a DCT-based compression technique. This technique combines the compression ratio of Neural Networks (NN) and Vector Quantization (VQ) with the energy-compaction property of the DCT. To increase the compression ratio while preserving the quality of the reconstructed image, the image is first compressed by the NN.

Shaou-Gang Miaou et al [65] have proposed a technique that combines JPEG-LS with interframe coding using motion vectors to provide better compression performance. Since the interframe correlation between adjacent images in a medical image sequence is typically not as high as in a general video sequence, the interframe technique is activated only when the interframe correlation is sufficiently high.


Robina Ashraf et al [61] have introduced a technique that provides high CRs for radiographic images without any loss in diagnostic quality. In this process, an image is first compressed lossily at a high compression ratio, and the resulting error image is then compressed losslessly. The resulting compression is not only strictly lossless but is also expected to reach a high compression ratio, particularly if the lossy compression technique is chosen correctly. A Neural Network Vector Quantizer (NNVQ) is employed as the lossy stage.

Kaur et al [41] have developed an adaptive image-coding algorithm for the compression of medical ultrasound (US) images in the wavelet domain. The histograms of the wavelet coefficients of the sub-bands of US images are heavy-tailed and can be closely modeled by the generalized Student's t-distribution. Exploiting these statistics, an adaptive image coder named JTQVS-WV is implemented, on which a Rate-Distortion (R-D) optimized quantizer and R-D optimal thresholding are predicated.

Bairagi et al [18] have proposed an automated, efficient, low-complexity, lossless, scalable RBC for Digital Imaging and Communications in Medicine (DICOM) images. RBC is utilized so that the regions are segmented into a number of different types based on the importance of each region and subjected to varying bit rates for optimal performance. The integer wavelet transform and a limited-bit-rate compression technique are used for the less important regions, which helps to reconstruct an image of the desired quality.

Tamilarasi et al [73] have proposed an extension of the wavelet transform to two dimensions using non-separable, directional filter banks. Since medical images are involved, the diagnostic part (ROI) is vital. Initially, ROIs are segmented from the whole image using a neural-network-and-fuzzy-logic technique. The contourlet transform is then applied to the ROI portion. The region of less


significance is coded using the DWT, and finally a modified embedded zerotree wavelet algorithm, which uses six symbols instead of four, is applied.

Jonathan Taquet et al [38] have proposed a different hierarchical process for resolution-scalable lossless and Near-Lossless (NLS) compression. It combines the adaptability of DPCM schemes with a new type of hierarchical-oriented predictor that provides resolution scalability. Since the performance of the hierarchical prediction is lower for smooth images, new predictors, dynamically optimized using a least-squares criterion, are introduced.

Harjeetpal Singh et al [31] have presented an implementation of the DWT and DCT, which are lossy techniques, extended with the Huffman encoding technique so that the final stage is lossless. The resulting PSNR and MSE improve upon those of previous algorithms.

Monika Narwal et al [49] have proposed a SPIHT-DCT algorithm for image compression. SPIHT and DCT each have some limitations; by combining them, the limitations are overcome.

Kesavamurthy Thangavelu et al [42] have developed a lossless method for volumetric medical image compression and decompression using an adaptive block-based encoding technique. The algorithm is tested on various collections of CT color images using MATLAB. The Digital Imaging and Communications in Medicine (DICOM) images are compressed with the proposed algorithm and stored as DICOM-format images. The inverse of the adaptive block-based algorithm is used to reconstruct the actual image losslessly from the compressed DICOM files.


Vidhya et al [77] have proposed an algorithm that extracts the edge information of medical images using a fuzzy edge detector. The image is decomposed using the Cohen-Daubechies-Feauveau wavelet. The hybrid technique is a combination of JPEG2000 and SPIHT: the coefficients in the approximation sub-bands are encoded with the Tier-1 part of JPEG2000, while the coefficients in the detail sub-bands are encoded using SPIHT. Quality images are obtained at a lower bit rate compared to other compression techniques.

Adnan Khashman [3] has related a neural network to the content of radiograph images to obtain an image compression ratio at optimal image quality. After training, the neural network provides the ideal Haar wavelet compression ratio for the x-ray images presented to it.

2.3 CONCLUSION

In essence, this chapter has summarized the selected literature survey carried out in the areas of the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), fractal algorithms, Neural Network Back Propagation (NNBP), Neural Network Radial Basis Function (NNRBF) and hybrid-technique-based approaches for medical image compression.


CHAPTER 3

METHODOLOGY

3.1 DISCRETE COSINE TRANSFORM (DCT)

A Discrete Cosine Transform (DCT) expresses a finite sequence of data points as a sum of cosine functions oscillating at different frequencies. DCTs are useful in numerous fields of science and engineering, from lossy compression of audio (e.g. MP3) and images (e.g. JPEG), where small high-frequency components can be discarded, to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical for compression, since it turns out (as described below) that fewer cosine functions are needed to approximate a typical signal. For differential equations, the cosines express a particular choice of boundary conditions.

Specifically, a Discrete Cosine Transform is a Fourier-related transform that resembles the Discrete Fourier Transform (DFT). DCTs are equivalent to DFTs of roughly twice the length operating on real data with even symmetry, where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The DCT is mainly used for image compression. One of the main issues accompanying the growth of technology is the delay caused by the colossal amount of data involved; such huge volumes of data can regularly pose challenges. The fundamental motivation behind image compression is to minimize the size of an image with no change in its quality, so that it is economical to store on disk and convenient to transmit.


Figure 3.1 Block diagram of DCT

Figure 3.1 shows that the given source input image undergoes the Discrete Cosine Transform, after which quantization is performed using the quantization table. The quantized values are then given to the entropy encoder, which produces the compressed image data.

3.1.1 Image Compression in DCT

JPEG is a standards committee that has its origins within the International Organization for Standardization (ISO). JPEG may be tuned to produce small compressed images that are of relatively low visual quality, yet still suitable for many applications. JPEG provides a compression procedure capable of compressing continuous-tone data with a pixel depth of 6 to 24 bits at reasonable speed and efficiency.

JPEG is designed especially to discard information that is not visible to the human eye. Slight changes in colour are barely noticeable to the human eye. Hence JPEG's lossy encoding stores the grey-scale part of an image more faithfully and is more aggressive with the colour components.

The Discrete Cosine Transform (DCT) helps to separate the image into spectral sub-bands of differing importance with respect to the image's visual quality. The DCT resembles the Discrete Fourier Transform: it transforms a signal or image from the spatial domain to the frequency domain. The DCT has several advantages.

i. It has been realized in a single integrated circuit.

ii. It packs the most information into the smallest number of coefficients.

iii. It minimizes the block-like appearance, called blocking artifacts, that results when the boundaries between sub-images become visible.

Figure 3.2 Conversion of spatial domain to frequency domain

3.1.2 DCT Encoding

The general mathematical formula for the 2-D DCT of an M by M image is given by the following equation:

F(u,v) = (2/M) C(u) C(v) Σ(x=0 to M-1) Σ(y=0 to M-1) f(x,y) cos[(2x+1)uπ/2M] cos[(2y+1)vπ/2M]   (3.1)

for u = 0, ....., M-1 and v = 0, ....., M-1,

where M = 8 and C(u) = 1/√2 for u = 0, C(u) = 1 otherwise (and similarly for C(v))   (3.2)

The operation of the DCT is as follows:

• The input signal is an image given by an M × M matrix.

• f(i, j) is the intensity of the pixel in row i and column j of the matrix.

• F(u, v) is the DCT coefficient in row u and column v of the DCT grid.

• For most images, the signal energy lies at low frequencies, which appear in the upper left corner of the DCT.

• Compression is achieved because the lower-right values, which represent the higher frequencies, are often small enough to be neglected with minimal visible distortion.

• The input to the DCT is an 8 × 8 block, each element of which is a pixel's grey-scale level.

• Every 8-bit pixel has levels from 0 to 255.

3.1.3 Compression Steps in DCT

The following steps are followed in image compression using the DCT:

• The selected image is considered as an array of size 512 × 512 and is processed block by block.

• A grey-scale image has pixel values ranging from 0 to 255, but the DCT operates on values in the range -128 to 127. Accordingly, 128 is subtracted from every pixel to bring each block into this range.

• The DCT matrix is computed using equation 3.1.

• The DCT is applied to every block by multiplying the level-shifted block by the DCT matrix on the left and by the transpose of the DCT matrix on its right.

• Each block is compressed through the quantization technique.

• The quantized matrix is entropy encoded.

3.1.4 Quantization Steps

Quantization is achieved by compressing a range of values to a single quantum value. In this step, the number of distinct values in the image data is reduced and the stream becomes more compressible. A quantization matrix is used in combination with the DCT coefficient matrix to carry out the transformation. Quantization is the stage where the majority of the compression happens; the DCT itself does not really compress the image, since it is essentially a lossless transformation.

Quantization makes use of the fact that the higher-frequency components are less essential than the lower-frequency ones. It permits various levels of image compression and quality through the choice of particular quantization tables. In this way, quality levels ranging from 1 to 100 can be selected, where '1' gives the poorest picture quality and '100' gives the best. The JPEG board recommends the matrix with quality level '50' as the standard table.

Quantization is performed by dividing each element of the transformed image matrix by the corresponding entry of the quantization table and rounding off the result. In the quantization table, the entries located close to the upper left corner have lower values, so the low-frequency coefficients are preserved more accurately.
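A minimal sketch of this step, using the standard JPEG luminance table for quality level 50 and the conventional scaling rule for other quality levels (the table and rule come from the JPEG standard, not from this thesis):

```python
import numpy as np

# Standard JPEG luminance quantization table (quality level 50)
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def scale_table(quality):
    """Scale Q50 for a quality level 1..100 using JPEG's conventional rule."""
    s = 5000/quality if quality < 50 else 200 - 2*quality
    q = np.floor((Q50*s + 50)/100).astype(int)
    return np.clip(q, 1, 255)

def quantize(F, q):
    """Divide each DCT coefficient by the table entry and round off."""
    return np.round(F/q).astype(int)

def dequantize(Fq, q):
    return Fq * q
```

At quality 50 the scaling rule returns the table unchanged; higher qualities shrink the divisors (less loss), lower qualities grow them (more zeros, more compression).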

3.1.5 Entropy Encoding

After quantization, most of the high-frequency coefficients will be zeros. To exploit the resulting long runs of zeros, a zig-zag scan of the matrix is used. Once a block has been transformed and quantized, the JPEG compression algorithm takes the result and converts it into a one-dimensional linear array, or vector, of 64 values, performing the zig-zag scan by selecting the elements in the numerical order indicated by the numbers in the matrix below:

     0  1  2  3  4  5  6  7
0:   0  1  5  6 14 15 27 28
1:   2  4  7 13 16 26 29 42
2:   3  8 12 17 25 30 41 43
3:   9 11 18 24 31 40 44 53
4:  10 19 23 32 39 45 52 54
5:  20 22 33 38 46 51 55 60
6:  21 34 37 47 50 56 59 61
7:  35 36 48 49 57 58 62 63

This places the components of the coefficient block in a rough order of increasing frequency. Since the higher frequencies are more likely to be zero after quantization, this tends to group the zero values in the high end of the vector.
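The zig-zag scan above can be sketched as follows; the ordering rule (walk the anti-diagonals, alternating direction) reproduces the matrix of scan positions shown earlier:

```python
def zigzag_order(n=8):
    """(row, col) pairs in JPEG zig-zag scan order for an n x n block.
    Cells are sorted by anti-diagonal d = i + j; odd diagonals run
    top-right to bottom-left (i ascending), even ones the other way."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def zigzag_scan(block):
    """Flatten a 2-D block into the 1-D vector of 64 values."""
    return [block[i][j] for i, j in zigzag_order(len(block))]
```

Scanning a quantized block this way leaves the run of trailing zeros at the end of the vector, where it can be coded very cheaply.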

3.2 DISCRETE WAVELET TRANSFORM (DWT)

The Discrete Wavelet Transform is computed independently for various segments of the time-domain signal at various frequencies. This multi-resolution analysis decomposes the signal at different frequencies with different resolutions. It is useful for signals having high-frequency components of short duration and low-frequency components of long duration, e.g. images and video frames.

The wavelet transformation is composed of an arrangement of low-pass and high-pass filters. The filter pairs can be applied much like a discrete FIR filter in DSP, using the multiply-accumulate (MAC) operation, except as multiple successive FIR filters. The low-pass filter performs an averaging/blurring operation, and is expressed as:

H=1/√2(1,1) (3.3)

The high-pass filter performs a differencing operation, and is expressed as:

G=1/√2(-1,1) (3.4)

Operating on adjacent pixel pairs, the complete wavelet transform can be represented in matrix form:

T = W_N A W_N^T   (3.5)

First half: Applying 1D Transformation to Rows of Image

Second half: Applying 1D Transformation to Columns of Image

where A is the matrix of the 2-D image pixels and T is the wavelet transformation of the image.

          [  1/√2   1/√2     0      0      0      0      0      0   ]
          [    0      0    1/√2   1/√2     0      0      0      0   ]
          [    0      0      0      0    1/√2   1/√2     0      0   ]
W_N = H = [    0      0      0      0      0      0    1/√2   1/√2  ]   (3.6)
      G   [ -1/√2   1/√2     0      0      0      0      0      0   ]
          [    0      0   -1/√2   1/√2     0      0      0      0   ]
          [    0      0      0      0   -1/√2   1/√2     0      0   ]
          [    0      0      0      0      0      0   -1/√2   1/√2  ]

The result of the complete transformation, T, is made up of four new sub-images, which correspond to the blurred image and to the vertical, diagonal, and horizontal differences between the original image and the blurred image. The blurred version of the image removes the fine details (the high-frequency components).
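A minimal sketch of one level of this 2-D Haar transform, applying the filters H and G of equations (3.3) and (3.4) to the rows and then to the columns and returning the four sub-images:

```python
import numpy as np

def haar2_level(A):
    """One level of the 2-D Haar DWT: H = (1,1)/sqrt(2) averages,
    G = (-1,1)/sqrt(2) differences, applied to rows then columns."""
    A = np.asarray(A, dtype=float)
    r, c = A.shape
    low  = (A[:, 0::2] + A[:, 1::2]) / np.sqrt(2)    # averaging (blur)
    high = (A[:, 1::2] - A[:, 0::2]) / np.sqrt(2)    # differencing (detail)
    rows = np.hstack([low, high])
    low2  = (rows[0::2, :] + rows[1::2, :]) / np.sqrt(2)
    high2 = (rows[1::2, :] - rows[0::2, :]) / np.sqrt(2)
    T = np.vstack([low2, high2])
    LL, LH = T[:r//2, :c//2], T[:r//2, c//2:]   # blur, horizontal detail
    HL, HH = T[r//2:, :c//2], T[r//2:, c//2:]   # vertical, diagonal detail
    return LL, LH, HL, HH
```

Because the filters are orthonormal, the total energy of the four sub-images equals that of the input, and a smooth region produces detail sub-images that are essentially zero.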

Figure 3.3 Block diagram of DWT

In Figure 3.3, the image to be compressed is given as input; it undergoes some pre-processing steps, followed by a coding algorithm based on the wavelet transform. For further compression, arithmetic coding is used, and finally reconstruction is performed to obtain the reconstructed image.

3.2.1 Advantages of DWT

• There is no need to divide the input coding into non-overlapping 2-D blocks, and it achieves higher compression ratios while being free from blocking artifacts.

• It allows good localization both in the time and in the spatial frequency domain.

• It transforms the entire image.

• It introduces inherent scaling.

• It gives better identification of which data is relevant to human perception, and hence a higher compression ratio.

3.2.2 Wavelets used in Image Compression

Wavelets are signals which are local in time and scale and generally have an irregular shape. A wavelet is a waveform of effectively limited duration that has an average value of zero. The term "wavelet" comes from the fact that they integrate to zero.

There are two methods of compression: lossy and lossless. Here, the DWT is used as a lossless method. The DWT is considered one of the important methods for image compression, since there is no loss of information during the compression of the image. Wavelets have further advantages for compressing signals.

The DWT can be applied to image compression by using a threshold value. Applying the DWT yields different levels of bands; once the threshold value is decided, the wavelet coefficients below it are neglected. In the wavelet transform, the decomposition of a given image consists of two parts: one is the lower-frequency or approximation part of the image (the scaling function) and the other is the higher-frequency or detailed part of the image (the wavelet function).

3.2.3 Aspects of Wavelets

The DWT plays an essential part in compressing a given image without the loss of any data in that image; it belongs to the lossless type of image compression. The wavelet transform is considered one of the most beneficial and helpful computational tools for a variety of signal and image processing applications. Wavelet transforms are commonly applied to images to reduce unwanted noise and blurring, and the wavelet transform has developed into a most powerful tool for both data and image compression. It performs multi-resolution image analysis, and the DWT has effectively been used in numerous image processing applications including noise reduction, edge detection and compression.

When we apply a high-pass filter to an image, the high variations in the grey level between two adjacent pixels are extracted, so the edges present in the image appear. When we apply a low-pass filter to an image, the smooth variations between the neighbouring pixels remain, so no edges are produced and all the information of the image stays close to the genuine image data (it appears as an approximation image). Figure 3.4 shows the three-level decomposition wavelet filters.

(Sub-band layout: LL3, LH3, HL3, HH3 at the coarsest level; LH2, HL2, HH2 at the middle level; LH1, HL1, HH1 at the finest level)

Figure 3.4 Three Level Decomposition Wavelet Filter

Using the DWT, an image is decomposed into four parts: the approximation image, horizontal details, vertical details and diagonal details. When we apply a high-pass filter to an image, the high variations in the grey level between two neighbouring pixels are extracted; when we apply a low-pass filter, the smooth variations between the adjacent pixels remain, and all the information of the image stays close to the genuine image data (it appears as an approximation image).

Figure 3.5 2-D Discrete Wavelet Transform in image compression

• The wavelet transform is fundamentally similar to the customary Fourier transform; however, it is based on small waves, called wavelets, of varying frequency and limited duration. We use the 2-D discrete wavelet transform in image compression.

• The input signal is split into low-pass and high-pass parts through analysis filters.

• The human visual system has different sensitivity to different frequency bands:

– The human eye is less sensitive to the high-frequency colour components.

3.3 FRACTAL ALGORITHM

Fractal image compression is a lossy compression method for digital images based on fractals. This compression technique is best suited to textures and natural images, relying on the fact that segments of an image frequently resemble other segments of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to reproduce the encoded image.

3.3.1 Presentation about Fractal Algorithm

One of the lossy image compression techniques currently available is fractal image compression, developed by Michael Barnsley and his colleagues in 1987. The method is a proprietary technology of Iterated Systems, Inc., a firm co-founded by Barnsley. Image compression techniques can also be classified as either symmetrical or asymmetrical. Fractal image compression is an example of an asymmetrical method: asymmetric methods take more time and effort compressing an image than decompressing it. The idea is to do most of the work during the compression.

Given an original image (in digital, bit-mapped form), say B (here we assume B is nonempty, otherwise there is nothing to be compressed), with a resolution of M×N pixels, the image file consists of a header followed by M×N cells of intensity data, one for every pixel. Given the resolution, the spatial coordinates of every pixel are implied. The size of the cell associated with each pixel varies, depending on the type of the image as described below. This procedure is independent of the resolution of the original image.

The output graphic will resemble the original at any resolution, since the compressor has found an IFS (Iterated Function System) whose attractor reproduces the original (i.e. a set of equations describing the original image). Obviously, the procedure takes a lot of work, particularly during the search for the suitable range regions. However, once the compression is done, the FIF (Fractal Image Format) file can be decompressed rapidly. In this sense, fractal image compression is asymmetric. Practical implementations of a fractal compressor offer diverse levels of compression.

3.3.2 Features of Fractal Algorithm

With fractal compression, encoding is significantly computationally expensive because of the search used to find the self-similarities. Decoding, however, is quite fast. While this asymmetry has so far made it impractical for real-time applications, when video is archived for distribution from disk storage or file downloads, fractal compression becomes more competitive.

At common compression ratios, up to around 50:1, fractal compression gives results comparable to DCT-based methods such as JPEG. At high compression ratios, fractal compression may offer superior quality. For satellite imagery, ratios of more than 170:1 have been achieved with acceptable results. Fractal video compression ratios of 25:1 to 244:1 have been reported at reasonable compression times (2.4 to 66 sec/frame). The compression efficiency increases with higher image complexity and colour depth, in contrast to simple grayscale images.

3.3.3 Fractal Image Compression

Imagine a special kind of photocopying machine that reduces the image to be copied and repeats it three times on the copy (see Figure 3.6). We can observe that all the copies seem to converge to the same resultant image. Since the copying machine reduces the input image, any initial image placed on the copying machine will be reduced to a point as we run the machine repeatedly; it is only the position and the orientation of the copies that determines what the resultant image looks like.

Figure 3.6 A photo copy machine that makes three reduced copies of the

input image
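The copy-machine idea can be illustrated with three contraction maps iterated from an arbitrary starting point (the particular maps below are illustrative choices, not taken from the thesis). Whatever the starting image, the orbit converges onto the same attractor:

```python
import random

# Three affine contractions, each shrinking by 1/2 and translating --
# the "copy machine" of Figure 3.6 (illustrative Sierpinski-style maps).
MAPS = [
    lambda x, y: (0.5*x,        0.5*y),
    lambda x, y: (0.5*x + 0.5,  0.5*y),
    lambda x, y: (0.5*x + 0.25, 0.5*y + 0.5),
]

def run_machine(x, y, n, seed=0):
    """Apply randomly chosen reduced copies n times (the 'chaos game');
    the orbit converges onto the attractor regardless of the start."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        x, y = rng.choice(MAPS)(x, y)
        pts.append((x, y))
    return pts
```

Two orbits started far apart but driven by the same sequence of copies end up essentially identical, because each copy halves the distance between them; this is exactly why only the positions of the copies, not the initial image, determine the result.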

3.4 SET PARTITIONING IN HIERARCHICAL TREES (SPIHT)

SPIHT stands for Set Partitioning in Hierarchical Trees. It was proposed by Said and Pearlman in 1996. It is a DWT-based image compression method that can provide lossless compression. SPIHT is a method of coding and decoding the wavelet transform of an image. The spatial-orientation tree of the wavelet transform structure (here, the three-level Haar decomposition) is used to describe how an image gets split and compressed.

3.4.1 Haar Wavelet

In mathematics, the Haar wavelet is a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis. Wavelet analysis resembles Fourier analysis in that it allows a target function over an interval to be represented in terms of an orthonormal function basis. The Haar sequence is now recognised as the first known wavelet basis and is extensively used as a teaching example.

The Haar sequence was proposed in 1909 by Alfred Haar. Haar used these functions to give an example of an orthonormal system for the space of square-integrable functions on the unit interval [0, 1]. The study of wavelets, and even the term "wavelet", did not come until much later. As a special case of the Daubechies wavelet, the Haar wavelet is also known as D2.

The Haar wavelet is moreover the simplest possible wavelet. The particular limitation of the Haar wavelet is that it is not continuous, and therefore not differentiable. This property can nevertheless be an advantage for the analysis of signals with sudden transitions, for instance the monitoring of equipment failure in machines.

The Haar wavelet's mother function can be described as

ψ(t) = 1 for 0 ≤ t < 1/2;  ψ(t) = -1 for 1/2 ≤ t < 1;  ψ(t) = 0 otherwise   (3.7)

Its scaling function can be described as

φ(t) = 1 for 0 ≤ t < 1;  φ(t) = 0 otherwise   (3.8)
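Equations (3.7) and (3.8) can be written directly as functions:

```python
def psi(t):
    """Haar mother wavelet, equation (3.7): +1 on [0, 1/2), -1 on [1/2, 1)."""
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1:
        return -1.0
    return 0.0

def phi(t):
    """Haar scaling function, equation (3.8): 1 on [0, 1), 0 elsewhere."""
    return 1.0 if 0 <= t < 1 else 0.0
```

The wavelet averages to zero over its support, which is the defining "integrates to zero" property mentioned earlier, while the scaling function captures the average (approximation) part.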

3.4.2 Formation of Cells

The smallest possible square matrix produced from the wavelet-decomposed image has the same level of wavelet decomposition structure as the original image.

Figure 3.7 Formation of cells of parent-offspring conditions

The Haar wavelet transform is simple, and better compression can be achieved by other wavelet filters. It seems that different wavelet filters produce different results depending on the image type, but it is currently not clear which filter is the best for any given image type. Regardless of the particular filter used, the image is decomposed into sub-bands, such that the lower sub-bands correspond to the higher image frequencies (they are the high-pass levels) and the higher sub-bands correspond to the lower image frequencies (low-pass levels), where most of the image energy is concentrated (Figure 3.8). This is why we can expect the detail coefficients to get smaller as we move from high to low levels. Also, there are spatial similarities among the sub-bands: an image part, such as an edge, occupies the same spatial position in each sub-band. These features of the wavelet decomposition are exploited by the SPIHT (Set Partitioning in Hierarchical Trees) method.

3.4.3 Zero Tree Encoding

In zero-tree based image compression schemes, for instance EZW and SPIHT, the aim is to use the statistical properties of the trees in order to efficiently code the locations of the significant coefficients.

Since most of the coefficients will be zero or close to zero, the spatial locations of the significant coefficients make up a large portion of the total size of a typically compressed image. A coefficient (likewise a tree) is considered significant if its magnitude (or the magnitudes of a node and all of its descendants, in the case of a tree) is above a particular threshold. By starting with a threshold which is close to the largest coefficient magnitudes and iteratively decreasing it, it is possible to create a compressed representation of an image which progressively includes finer detail. Owing to the structure of the trees, it is likely that if a coefficient in a particular frequency band is insignificant, then all of its descendants (the spatially related higher-frequency band coefficients) will also be insignificant.

3.4.4 SPIHT Algorithm

• O(i, j): set of coordinates of all offspring of node (i, j); the children alone.

• D(i, j): set of coordinates of all descendants of node (i, j); children, grandchildren, great-grandchildren, and so on.

• H: set of all tree roots (nodes in the highest pyramid level); the parents.

• L(i, j): D(i, j) – O(i, j) (all descendants apart from the offspring); grandchildren, great-grandchildren, and so on.

Figure 3.8 Spatial Orientation Tree in SPIHT
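The sets O, D and L and the significance test can be sketched for the usual dyadic parent-offspring rule, in which node (i, j) has children (2i, 2j), (2i, 2j+1), (2i+1, 2j) and (2i+1, 2j+1) (a standard convention; details such as the special treatment of the coarsest band are omitted here):

```python
def offspring(i, j):
    # O(i, j): the four direct children of node (i, j)
    return [(2*i, 2*j), (2*i, 2*j+1), (2*i+1, 2*j), (2*i+1, 2*j+1)]

def descendants(i, j, size):
    # D(i, j): all descendants of (i, j) inside a size x size transform
    out, frontier = [], offspring(i, j)
    while frontier:
        nxt = []
        for a, b in frontier:
            if a < size and b < size:
                out.append((a, b))
                nxt.extend(offspring(a, b))
        frontier = nxt
    return out

def grand_descendants(i, j, size):
    # L(i, j) = D(i, j) - O(i, j): descendants excluding the children
    return [p for p in descendants(i, j, size) if p not in offspring(i, j)]

def significant(coeffs, nodes, threshold):
    # A set of nodes is significant if any coefficient magnitude
    # reaches the current threshold
    return any(abs(coeffs[a][b]) >= threshold for a, b in nodes)
```

The significance test is what the zero-tree idea exploits: when `significant` is false for D(i, j), the whole subtree can be coded with a single bit.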

3.5 INTRODUCTION TO NEURAL NETWORKS

Generally, an image is divided into a number of non-overlapping pixel blocks, which are fed as patterns for network training. In comparison with vector quantization, the required encoding/decoding time is substantially less. However, only a very limited amount of compression is achieved, as only the correlation between the pixels inside each of the training patterns is exploited.

A higher compression ratio can be achieved with hierarchical neural networks, but this is costly because of the physical structure of the NN. To make image compression practical, it is necessary to reduce the enormous size of most image data, which in turn reduces the physical structure of the NN. To reduce the size considerably, several image processing steps such as edge detection and thresholding are applied and discussed. The principal concern of the second phase of the work is to adaptively determine the structure of the NN that encodes the image using the back propagation training method.

A new strategy has been adopted for initializing the weights between the middle layer and the hidden layer neurons, in place of randomizing the initial weights. Here, the spatial coordinates of the pixels of the image block are converted from two-dimensional to one-dimensional values and normalized between 0 and 1. This methodology exhibits a fast rate of convergence of the training algorithm and has been tested for various images. In this research, a supervised learning algorithm for artificial neural networks, namely the error back propagation learning algorithm for a layered feed-forward network, has been implemented for image compression, and the simulation results of the back propagation algorithm are analyzed. There are two types of learning algorithms in neural networks: supervised learning algorithms and unsupervised learning algorithms.

Figure 3.9 Block diagram of Neural Network

Figure 3.9 shows that the input image is given to the training kit, which compares the input image with its database. After this training process, a compressed image is produced using supervised or unsupervised learning algorithms.

3.5.1 Back Propagation Neural Networks (BPNN)

Most of these image compression methods depend on the back propagation feed-forward neural network, which is able to find possible solutions to the problem and is applicable in many fields where high computational rates are required. Initially, the image is decomposed into numerous pixel blocks for compression. These blocks are then encoded and given as the input training pattern to the network, to be transmitted and then reconstructed at the receiver side. In the back propagation process, the entire network consists of an input layer, an output layer and one or more hidden layers.

The spatial coordinates of the pixel values are encoded and converted from two-dimensional to one-dimensional values, and compression occurs when the inputs are multiplied by their corresponding weights to get the total weighted sum of the input. This weighted sum is passed through a sigmoidal function to yield the output pattern; this is the first, or forward, phase. Once the output is obtained, the error is calculated, after which the process propagates in reverse by finding the changes that occur between the output and hidden layers, and between the hidden and input layers.

Figure 3.10 General Structure of BPNN

Andrew et al [10] have proposed techniques exploring two basic neural networks (i.e., the BPNN-L and BPNN-R models) for online generalized matrix inversion. In addition, two discrete-time Hopfield-type neural networks (i.e., the HNN-L and HNN-R models) are introduced for the online computation of the generalized inverse. Seyun Kim et al [64] have described this technique clearly and reported the CR and PSNR for the cameraman image using this BPNN algorithm. In their research, the compressed image is obtained only at 1100, 1300, 1900 epochs, etc.

In contrast, our contribution is that we have improved the CR and PSNR performance over the existing paper of Leyuan Fang et al [44]. In this research, we obtain the compressed image in less time, i.e. within an average of 22 epochs for the input medical images. An improvement in performance compared to the existing papers can be observed.

3.5.2 Image Compression using Back Propagation

The use of the back propagation neural network algorithm in an image compression framework with good performance has been illustrated. The back propagation neural network has been trained and tested for the analysis of various images, and it has been observed that the convergence time for the training of the back propagation neural network is fast. Diverse characteristics of compression, for example the compression ratio, peak signal-to-noise ratio and bits per pixel, are calculated. It has been observed that the compression ratio changes from 0.99 to 0.9556 in the case of the cameraman image.

It has also been observed that there is a remarkable improvement in the peak signal-to-noise ratio, from 19.3181 to 20.722. The adaptive qualities of the proposed approach provide modularity in organizing the architecture of the network, which speeds up the processing, makes it less vulnerable to failure and makes it easy to modify. The procedure of initializing the weights exhibits a fast rate of convergence, and using the trained weight sets, good quality retrieved images are available at the receiving end.

One of the most famous NN algorithms is the back propagation algorithm; it is claimed that the BP algorithm can be divided into four fundamental steps. After choosing the weights of the network randomly, the back propagation algorithm is used to compute the necessary corrections. The algorithm can be decomposed into the following four stages:

• Feed-forward computation

• Back propagation to the output layer

• Back propagation to the hidden layer

• Weight updates

The algorithm is halted when the value of the error function has become sufficiently small. This is a rough and basic outline of the BP algorithm.

3.5.3 Use of Image Compression in Back Propagation

On the other hand, here are a few circumstances where a BPNN may be a good idea:

• A vast amount of input/output data is available, but you are not sure how to relate it to the output.

• The problem appears to have overwhelming complexity; however, there is clearly a solution.

• It is easy to create a number of examples of the correct behaviour.

• The answer to the problem may change over time, within the limits of the given input and output parameters (i.e., today 2+2=4, but in the future we may find that 2+2=3.8).

• Outputs can be "fuzzy" or non-numeric.

Training Algorithm

Step 1: Normalize the inputs and outputs with respect to their maximum values. It has been demonstrated that neural networks work better if the inputs and outputs lie between 0 and 1.

Step 2: The image is split into non-overlapping sub-images. For instance, a 512 × 512 image will be split into blocks of 4 × 4, 8 × 8 or 16 × 16 pixels.

Step 3: The normalized pixel values of the sub-image are the inputs to the nodes. The three-layered Radial Basis Function network trains on each sub-image.

Step 4: The number of neurons in the hidden layer is designed for the desired compression. The number of neurons in the output layer is the same as that in the input layer.

Step 5: The output of the input layer is evaluated using the transfer function of a radial basis neuron.

Step 6: The input layer and output layer are fully connected to the hidden layer. The weights of the synapses connecting input neurons to hidden neurons, and of the synapses connecting hidden neurons to output neurons, are initialized.

Step 7: The input to the hidden layer is computed by multiplying by the corresponding synaptic weights. The hidden layer units evaluate their output using the transfer function of a radial basis neuron.

Step 8: The input to the output layer is computed by multiplying by the corresponding synaptic weights. The output layer neurons evaluate their output using a linear function.

Step 9: The neural network is tested on various images. The output is then scaled back to the original gray scale range.

Step 10: Calculate the image quality parameters.
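The ten steps above can be sketched end-to-end for a single image. The following NumPy sketch uses a synthetic 512×512 image, Gaussian basis functions with an assumed spread, randomly chosen centres, and a least-squares fit for the output weights; none of these choices is taken from the thesis's MATLAB implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps 1-2: normalise a synthetic 512x512 image to [0, 1] and split
# it into non-overlapping 8x8 sub-images (blocks).
img = rng.integers(0, 256, size=(512, 512)).astype(float)
img_n = img / img.max()
B = 8
blocks = (img_n.reshape(512 // B, B, 512 // B, B)
                .swapaxes(1, 2).reshape(-1, B * B))    # (4096, 64)

# Steps 3-4: a hidden layer smaller than the 64 input/output nodes
# gives the desired compression (16 hidden neurons -> 4:1 per block).
n_hidden = 16
centers = blocks[rng.choice(len(blocks), n_hidden, replace=False)]
beta = 4.0   # assumed spread parameter

# Steps 5-7: Gaussian radial basis transfer function.
def hidden_activations(x):
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * d2)

H = hidden_activations(blocks)                  # compressed codes
# Step 8: linear output layer; weights fitted by least squares.
W, *_ = np.linalg.lstsq(H, blocks, rcond=None)

# Step 9: reconstruct and scale back to the original gray range.
recon = (H @ W).clip(0, 1)
out = (recon.reshape(512 // B, 512 // B, B, B)
            .swapaxes(1, 2).reshape(512, 512)) * img.max()

# Step 10: an image quality parameter.
mse = np.mean((img - out) ** 2)
```

The per-block hidden activations `H` play the role of the compressed representation; storing 16 values instead of 64 per block is where the compression arises.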


3.5.4 Neural Network Radial Basis Function (NNRBF)

A Radial Basis Function network is an ANN that uses RBFs as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have numerous uses, including function approximation, time series prediction, classification and system control.

An RBF network can be used to find a set of weights for a curve-fitting problem. The weights are in a higher dimensional space than the original data. Learning is equivalent to finding a surface in the high dimensional space that gives the best fit to the training data. The hidden layer provides a set of functions that constitute an arbitrary basis for the input patterns.

3.5.4.1 Radial basis function operation

In an ANN, every neuron in an MLP (Multilayer Perceptron) computes the weighted sum of its input values. That is, every input value is multiplied by a coefficient, and all the results are summed. A single MLP neuron is a plain linear classifier, but complex non-linear classifiers can be constructed by combining these neurons into a network. The RBFN approach is more intuitive than the MLP. To classify a fresh input, every neuron estimates the Euclidean distance between its model and the input. Figure 3.13 shows the general structure of the NNRBF algorithm. An input vector x is applied as input to all radial basis functions, each with different parameters. Each RBF neuron compares the input vector to its model, and outputs a value in the range [0, 1] which measures the similarity. If the input is identical to the model, then the RBF neuron's output will be 1. As the distance between the model and the input increases, the output falls off exponentially towards 0. The RBF neuron's output resembles a bell curve. The output of the network is composed of a set of nodes. Every output node calculates a score for its linked class. The score is calculated by taking a weighted sum of the activation values from each RBF neuron. By weighted sum


we mean that an output node associates a weight value with every RBF neuron, and multiplies the neuron's activation by this weight before adding it to the total output. The input can be represented as a vector of real numbers. The output of the network is then a scalar function of the input vector,

φ(x) = Σ_{i=1}^{N} a_i ρ(‖x − c_i‖)

where N is the number of neurons in the hidden layer, c_i is the center vector for neuron i, and a_i is the weight of neuron i in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector; therefore the function is named a Radial Basis Function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken as the Euclidean distance, and the radial basis function is commonly taken to be Gaussian.

The Gaussian basis functions are local to the center vector in the sense that

lim_{‖x‖→∞} ρ(‖x − c_i‖) = 0

i.e. changing the parameters of one neuron has only a small effect on input values that are far from the center of that neuron.

Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of R^n. This means that an RBF network with enough hidden neurons can approximate any continuous function with arbitrary precision.

The parameters a_i, c_i and β_i are determined in a manner that optimizes the fit between φ and the data.
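The network output can be evaluated directly from this formula. A small numeric sketch with Gaussian basis functions follows; the centres, spreads and weights below are invented for illustration, not taken from any trained network in this work:

```python
import numpy as np

def rbf_output(x, centers, betas, a):
    """phi(x) = sum_i a_i * exp(-beta_i * ||x - c_i||^2)."""
    d2 = ((centers - x) ** 2).sum(axis=1)   # squared distances to centres
    return float(a @ np.exp(-betas * d2))

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # c_i
betas = np.array([1.0, 1.0])                   # beta_i (spreads)
a = np.array([0.5, 0.5])                       # a_i (linear weights)

# At a centre, that neuron's activation is exactly 1; far away it
# decays exponentially towards 0 (the "bell curve" behaviour).
value = rbf_output(np.array([0.0, 0.0]), centers, betas, a)
```

At x = (0, 0) the first neuron contributes its full weight (activation 1) while the second contributes only 0.5·e⁻², illustrating the locality of the Gaussian basis.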


Figure 3.11 General Structure of NNRBF Algorithm

An input vector x is used as input to all radial basis functions, each with different parameters. The output of the network is a linear combination of the outputs from the radial basis functions.

The exact interpolation of a set of N data points in a multi-dimensional space requires every one of the D-dimensional input vectors x^p = {x_i^p : i = 1, ..., D} to be mapped onto the corresponding target output t^p. The objective is to find a function f(x) such that

f(x^p) = t^p,  p = 1, ..., N  (3.12)

Each RBF neuron stores a “prototype” vector, which is just one of the

vectors from the training set. Each RBF neuron compares the input vector to its

prototype and outputs a value between 0 and 1, which is a measure of similarity.

If the input is equal to the training prototype, then the output of that RBF neuron will be '1'; as the distance between the input and the test data grows, the response falls off exponentially towards 0. The shape of the RBF neuron's response is a bell curve, as illustrated in the network architecture diagram, Figure 3.13.


The neuron's response value is also called its "activation" value. The prototype vector is also often called the neuron's "center", since it is the value at the center of the bell curve.

3.5.4.2 Output nodes

The output of the network consists of nodes equal in number to the number of outputs. Each output node computes a score for its associated category. Typically, a classification decision is made by assigning the input to the category with the highest score.

The score is computed by taking a weighted sum of the activation

values from every RBF neuron. By weighted sum, it is meant that an output node

associates a weight value with each of the RBF neurons and multiplies the

neuron’s activation by this weight before adding it to the total response. Because

each output node is computing the score for a different category, every output

node has its own set of weights. The output node will typically give a positive

weight to the RBF neurons that belong to its category and a negative weight to

the others.
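A tiny numeric sketch of this weighted-sum scoring follows; the activation values and per-class weights are invented purely for illustration:

```python
import numpy as np

# Activations of 4 RBF neurons for one input (values invented here).
phi = np.array([0.9, 0.1, 0.4, 0.05])

# Each output node (class) has its own weight vector: positive
# weights for the RBF neurons of its own category, negative for
# the others.
W = np.array([[ 1.0,  1.0, -0.5, -0.5],   # class 0
              [-0.5, -0.5,  1.0,  1.0]])  # class 1

scores = W @ phi          # one weighted sum of activations per class
winner = int(np.argmax(scores))
```

Because the first two neurons (class 0's own) are strongly activated, class 0's score dominates and the input is assigned to it.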

3.5.4.3 Training of RBF neural networks

As specified previously, training of an RBF neural network can be carried out by choosing the optimal values for the following parameters:

1) (w): the weights between the hidden layer and the output layer.

2) (β): the parameters of the neurons in the output layer.

3) (c): the centre vectors in the hidden layer.

4) (α): the parameters of the hidden layer basis functions.
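A common two-stage training sketch: fix the centres (c) and the basis-function spread, then solve for the output weights (w) by linear least squares. The 1-D curve-fitting data, grid of centres and shared spread value below are all assumptions for illustration:

```python
import numpy as np

# Training data for a 1-D curve-fitting problem (assumed).
x = np.linspace(0.0, 1.0, 50)[:, None]
t = np.sin(2 * np.pi * x).ravel()

# (c) centre vectors: a fixed grid here, rather than learned.
c = np.linspace(0.0, 1.0, 10)[None, :]
# basis-function spread: one shared assumed value.
beta = 50.0

# Hidden-layer design matrix of Gaussian activations.
Phi = np.exp(-beta * (x - c) ** 2)          # shape (50, 10)

# (w) hidden-to-output weights: once c and beta are fixed, the
# only remaining parameters, solved by linear least squares.
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
resid = float(np.max(np.abs(Phi @ w - t)))  # fit residual
```

Fixing the nonlinear parameters turns the remaining problem into ordinary linear regression, which is why this two-stage scheme is popular for RBF networks.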


CHAPTER 4

COMPRESSION TECHNIQUES FOR MEDICAL IMAGES

USING FRACTAL, SPIHT AND DCT ALGORITHMS

4.1 INTRODUCTION

Recent radiology techniques offer crucial medical information for

radiologists to diagnose diseases and find out appropriate treatments. Such

medical information must be obtained through medical imaging processes.

Since the medical images are in digital format, more cost-effective compression

technologies are required to reduce the mass volume of the digital image data

produced in the hospitals. Medical image compression is a challenging task as

the high frequency components contain details relevant for medical diagnosis. In

medical image compression, diagnosis is efficient only when compression

techniques conserve all the significant and important image information.

The idea of image compression technique is to minimize the

redundancy of the image data in order to store or transmit data in a competent

form. This results in the diminution of file size and allows more images to be

accumulated in a given amount of disk or memory space. Typically,

compression scheme can be categorized into two major categories: lossless and

lossy compressions. The lossy image compression is not very commonly used in

medical practice and diagnosis because even with a minor data loss, it is possible

that the physicians and radiologists fail to spot the critical information that could

be a crucial element for the diagnosis of a patient. In a lossless compression,

compressed data can be used to reconstruct an exact replica of the original

image; no information is lost due to the compression process. These necessities

are not satisfied with old techniques of compression like Fourier Transform,

Hadamard and Cosine Transform etc. due to high mean square error occurring

between original and compressed images. Fractal compression is a lossy


compression method for digital images, based on Fractals. The method is the

best suited for textures and natural images, relying on the fact that parts of an

image often resemble other parts of the same image. Fractal algorithms convert

these parts into mathematical data called Fractal codes, which are used to

recreate the encoded image. Among these methods, Set Partitioning in Hierarchical Trees (SPIHT) is a powerful wavelet-based image compression method. Thousands of researchers and practitioners have tested and used SPIHT.

4.2 FRACTAL, SPIHT AND DCT METHODS

By using the Fractal, SPIHT and DCT algorithms, better PSNR values can be obtained and the error is much reduced in the process.

4.2.1 Fractal

Fractal image coding is based on the Partitioned Iterated Function System (PIFS), in which an original input image is divided into a set of non-overlapping sub-blocks, called range blocks, spread over the entire image. Figure 4.1 illustrates the steps involved in Fractal image compression.

Figure 4.1 Flow diagram of Fractal coding


4.2.2 Set Partitioning in Hierarchical Trees

The Set Partitioning in Hierarchical Trees (SPIHT) methodology is not a simple extension of earlier compression methods, and represents a vigorous advance within the field. SPIHT is an efficient image compression routine utilizing the wavelet transform; image coding using the wavelet transform has attracted considerable attention, and SPIHT has been exceptionally effective. The method deserves special attention, as it provides numerous advantages over traditional methods. Figure 4.2 illustrates the basic operation involved in the SPIHT method. Figure 4.3 illustrates the formation of cells in the SPIHT method.

Figure 4.2 Basic block diagram of SPIHT method

Figure 4.3 Formation of cells of SPIHT

Training Algorithm

Step 1: The medical image is partitioned into small, non-overlapping square blocks, typically called "parent blocks".


Step 2: Each parent block is divided into 4 blocks, named "child blocks".

Step 3: Now compare each child block against a subset of all possible overlapping blocks of parent block size.

Step 4: Reduce the size of the parent block to allow the comparison to work.

Step 5: Determine which larger block has the lowest difference, according to some measurement.

Step 6: Now, to match the intensity levels between the large block and the child block, the grayscale transform is calculated, where an affine transform (w·x = a·x + b) is used to match grayscale levels.

Step 7: The upper left corner child block is very similar to the upper right parent block.

Step 8: Compute the affine transform.

Step 9: Store the location of the parent block (or transform block), the affine transform components, and the related child block into a file.

Step 10: Repeat for each child block.
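The block-matching core of these steps can be sketched for a single child block. The image, block sizes and coarse search grid below are assumptions for illustration; a real encoder searches many more candidate blocks and orientations:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64))   # stand-in for a medical image

def shrink(block):
    """Average 2x2 pixels so a 16x16 domain block matches an 8x8 child."""
    return block.reshape(8, 2, 8, 2).mean(axis=(1, 3))

def grayscale_fit(d, r):
    """Least-squares a, b for the affine transform w*x = a*x + b."""
    a, b = np.polyfit(d.ravel(), r.ravel(), 1)
    return a, b

# One 8x8 child block to encode (the top-left corner).
child = img[0:8, 0:8]

best = None
# Search a coarse subset of 16x16 candidate parent/domain blocks.
for i in range(0, 48, 8):
    for j in range(0, 48, 8):
        dom = shrink(img[i:i + 16, j:j + 16])
        a, b = grayscale_fit(dom, child)
        err = np.mean((a * dom + b - child) ** 2)
        if best is None or err < best[0]:
            best = (err, i, j, a, b)

err, i, j, a, b = best
# The stored fractal code: domain position plus affine coefficients.
code = {"domain": (i, j), "a": a, "b": b}
```

Only the best-fit position and the two affine coefficients are stored per child block, which is what makes the representation compact.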

4.2.3 Discrete Cosine Transform

It computes the 2-D DCT of 8-by-8 blocks in an input image, and is widely used as part of the JPEG image compression algorithm. First, the input image is divided into 8-by-8 (see Figure 4.4) or 16-by-16 blocks, and the 2-D DCT is computed for every block. The DCT coefficients are then quantized, coded and transmitted. The JPEG receiver (or JPEG file reader) decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of every block and then assembles the blocks back into a single image. For typical images, many DCT coefficients have values near zero. These coefficients can be discarded without seriously affecting the quality of the reconstructed image.


Figure 4.4 Two-dimensional DCT of 8-by-8 blocks in the image

Step 1: Read the entire image into the workspace and convert it to class double.

Step 2: Compute the 2-D DCT of the 8-by-8 blocks in the image using an m×m DCT matrix.

Step 3: Discard all but 10 of the 64 DCT coefficients in each block.

Step 4: Reconstruct the image using the 2-D inverse DCT of each block.

Step 5: Display the original image and the reconstructed image side by side; the reconstructed image is easily recognizable even though almost 85% of the DCT coefficients are discarded.
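These steps can be sketched with an explicit orthonormal DCT matrix (the workflow mirrors the MATLAB steps above, but the tiny random test image and the low-frequency mask used to keep 10 coefficients are assumptions of this sketch):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (the m-by-m DCT matrix of step 2)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

rng = np.random.default_rng(4)
img = rng.random((16, 16))          # stands in for the input image
C = dct_matrix(8)

# Keep the 10 lowest-frequency coefficients of each 8x8 block by
# zeroing everything outside a small triangular mask (1+2+3+4 = 10).
mask = np.zeros((8, 8), dtype=bool)
for u in range(8):
    for v in range(8):
        if u + v <= 3:
            mask[u, v] = True

recon = np.empty_like(img)
for i in range(0, 16, 8):
    for j in range(0, 16, 8):
        block = img[i:i + 8, j:j + 8]
        coeff = C @ block @ C.T      # step 2: 2-D DCT of the block
        coeff[~mask] = 0.0           # step 3: discard 54 of 64 coeffs
        recon[i:i + 8, j:j + 8] = C.T @ coeff @ C   # step 4: inverse

mse = np.mean((img - recon) ** 2)    # lossy: small but nonzero error
```

Because C is orthonormal (C·Cᵀ = I), applying Cᵀ·coeff·C with unmodified coefficients would recover the block exactly; the loss comes only from the discarded coefficients.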

4.3 IMAGE QUALITY PARAMETER EVALUATION

The simulations are carried out using MATLAB-Simulink and verified using mathematical equations. PET, CT and MRI images are preferred for a detailed investigation. In the Fractal, DCT and SPIHT algorithms, it is important to predefine a few parameters for comparing their results with previous compression algorithms. The various image parameters have been listed out from the examination. These quality measures are used for the assessment of imaging systems; they demonstrate the efficiency of each algorithm and show the outcome.

4.3.1 Performance Parameter

The quality of the compressed image can be measured by many

parameters. The most commonly used parameters are Compression Ratio (CR),

Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Bits per pixel (Bpp), Elapsed Time and Memory.

A. Compression Ratio (CR)

It is defined as the ratio of the size of the original image to the size of the compressed image.

CR = n1 / n2   (4.1)

where n1 and n2 are the sizes of the original input image and the compressed output image, respectively.

B. Mean Square Error (MSE)

MSE is used to estimate the quality of the compressed image. The smaller the value of the MSE, the higher the quality of the compressed image. It can be expressed as:

MSE = (1/(m·n)) Σ_{x=1}^{m} Σ_{y=1}^{n} [f(x,y) − g(x,y)]²   (4.2)

where f(x,y) is the original image, g(x,y) is the reconstructed image, and m, n are the numbers of rows and columns of the input image.

C. Peak Signal to Noise Ratio (PSNR)

PSNR(f, g) = 20 log10(MAX / √MSE)   (4.3)

It is characterized as the ratio of the maximum possible pixel value (MAX) of the image to the MSE. If the PSNR is high, then the quality of the reconstructed image is also high.


D. Bits per pixel (Bpp)

The number of bits used to encode each pixel value is termed as Bpp.

Thus for the purpose of compression, Bpp should be minimum to reduce the

storage on the memory.

E. Elapsed Time

The elapsed time gives the duration in seconds of the process from the input image to the compressed image.
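The parameters above can be computed together. The helper below is a sketch assuming an 8-bit grayscale image; the image, noise and compressed size used to exercise it are invented:

```python
import numpy as np

def compression_metrics(original, reconstructed, compressed_bits):
    """CR, MSE, PSNR (dB) and Bpp for an 8-bit grayscale image."""
    m, n = original.shape
    original_bits = m * n * 8                 # 8 bits per input pixel
    cr = original_bits / compressed_bits      # eq. (4.1)
    mse = np.mean((original.astype(float) -
                   reconstructed.astype(float)) ** 2)   # eq. (4.2)
    psnr = 20 * np.log10(255.0 / np.sqrt(mse)) if mse > 0 else np.inf
    bpp = compressed_bits / (m * n)
    return cr, mse, psnr, bpp

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.integers(-2, 3, size=img.shape), 0, 255)
cr, mse, psnr, bpp = compression_metrics(img, noisy, compressed_bits=8192)
```

Note that CR and Bpp are two views of the same quantity: for an 8-bit source, Bpp = 8 / CR.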

4.4 RESULTS AND COMPARISON

From Table 4.1, it is observed that all the parameters obtained in the evaluation test are within the acceptable limit. Hence it shows that after the compression, the quality of the image is not degraded.

The simulation results for MR, PET and CT images are shown below

and the comparison tables are also included. The images with the highest CR and

PSNR are included below. The following figures (Figure 4.5 to 4.8) show the

comparison chart of CR, PSNR, Memory used and Execution time of various

medical images for methods like Fractal, Neural Network Back Propagation and

Radial Basis Function Neural Network.

The analysis of the various techniques shows that SPIHT provides better CR values among all the techniques (DCT, Fractal). But DCT provides higher PSNR values with less execution time, and it also occupies less memory space compared to the other techniques (SPIHT, Fractal). Therefore, we can conclude that the DCT compression method is best suited to MR, PET and CT medical images.


Table 4.1 Performance comparison of 24 medical images obtained using the DCT, SPIHT and Fractal algorithms

Images        |  CR                     |  PSNR                      |  Memory (kB)          |  Execution Time (s)
              |  DCT     SPIHT  Fractal |  DCT      SPIHT    Fractal |  DCT    SPIHT  Fractal|  DCT     SPIHT     Fractal
CT Image 1    |  1.4088  0.4410  6.6507 |  88.8087  38.1902  38.2519 |  21.40  66.40  26.80  |  0.7572  42.9628   11.7500
CT Image 2    |  1.3442  0.3542  6.6622 |  88.3888  37.1780  38.7416 |  23.20  60.70  28.40  |  0.7645  45.9385   11.7969
CT Image 3    |  1.4679  0.5264 13.2030 |  85.8091  36.8513  35.8632 |  31.60  75.50  43.20  |  0.7522  100.4340  16.3281
CT Image 4    |  1.3443  0.5526  5.0707 |  92.4192  36.7236  38.9335 |  21.10  60.40  25.00  |  0.8272  35.4521   10.8594
MR Image 5    |  1.4402  0.9332  6.5164 |  90.5865  36.3537  43.6002 |  18.00  66.20  21.90  |  0.9514  60.0308   12.0000
MR Image 6    |  1.6127  1.3050 10.9643 |  91.8776  36.3546  41.7960 |  18.90  65.90  26.40  |  0.8773  46.2459   14.8125
MR Image 7    |  1.3586  0.7977  4.4960 |  92.2370  35.8857  39.0490 |  22.60  61.10  23.00  |  0.7620  32.1609   10.5625
MR Image 8    |  1.5927  0.8920  7.5873 |  97.7195  35.8076  42.3063 |  19.20  65.60  25.10  |  1.5385  69.1809   12.3906
MR Image 9    |  1.5900  0.5976  5.1806 | 101.4771  35.3169  43.6769 |  15.70  52.60  20.50  |  0.7825  20.8594   10.9688
MR Image 10   |  1.2792  0.2437  6.9771 |  90.5101  35.9551  43.3527 |  20.50  66.40  23.70  |  1.1055  39.8822   12.1094
MR Image 11   |  1.3006  0.1853  6.5397 |  90.6810  35.8193  43.0880 |  19.60  73.00  23.00  |  0.8540  35.8102   11.9219
MR Image 12   |  1.4355  0.4519  3.4217 |  99.6740  35.4337  40.7357 |  17.50  61.50  22.30  |  0.7832  39.5033   9.8438
MR Image 13   |  1.4994  0.4558  1.8435 | 114.2706  36.6741  43.4892 |  12.40  46.10  17.80  |  0.7919  39.6286   9.0625
MR Image 14   |  1.3451  0.1391  2.3289 |  97.6726  36.5000  41.3458 |  14.40  36.20  15.90  |  1.4770  48.2454   9.0938
MR Image 15   |  1.6541  0.6722  3.0478 |  98.4225  35.9084  39.6035 |  17.20  56.90  20.50  |  0.7612  48.3930   9.4844
MR Image 16   |  1.5732  1.0766  3.4444 |  95.1357  35.8668  39.9027 |  18.20  53.90  22.70  |  0.8803  45.3308   10.0625
MR Image 17   |  1.4197  0.1334  4.8070 |  96.8298  35.7687  39.0449 |  23.40  62.40  29.90  |  1.0694  45.0365   10.7031
MR Image 18   |  1.6541  0.6722  3.0478 |  98.4225  35.9084  3.0478  |  17.20  56.90  20.50  |  0.9104  38.9254   9.5000
MR Image 19   |  1.7575  0.1213  3.4270 |  90.0821  36.3353  3.4270  |  26.80  40.70  28.20  |  1.0884  51.7431   10.1094
MR Image 20   |  1.0884  0.4835  4.1237 |  91.1779  36.2339  34.1952 |  27.40  51.90  32.70  |  1.2724  45.5717   10.5156
MR Image 21   |  1.2792  0.2437  6.2566 |  90.5101  35.9551  43.3019 |  20.50  66.40  20.60  |  0.7581  46.0090   11.5313
PET Image 22  |  1.5891  0.5745  5.5154 |  97.2405  36.1077  43.3578 |  14.80  59.20  19.20  |  0.8768  42.3300   11.2656
PET Image 23  |  1.4908  0.4473  1.4721 | 112.8808  35.1495  43.9157 |   9.97  41.10  14.30  |  0.7791  28.1462   8.5625
PET Image 24  |  1.5791  0.3454  1.1946 | 112.1955  36.6452  39.3448 |  13.20  34.50  18.80  |  1.0188  41.1812   8.8125


Figure 4.5 Compression Ratio expressed in percentage

Figure 4.5 shows the compression ratio of three different algorithms

namely DCT, SPIHT and Fractal. It is clearly evident that SPIHT provides better

CR values.

Figure 4.6 PSNR for the three different algorithms: DCT, SPIHT and Fractal


Figure 4.7 Memory expressed in kilobytes

Figure 4.7 shows the memory usage of the three different algorithms, namely DCT, SPIHT and Fractal. It is noticed that DCT uses less memory for image compression.

Figure 4.8 Execution Time expressed in seconds

Figure 4.8 shows the execution time of the three different algorithms, namely DCT, SPIHT and Fractal. Here, DCT produces compression results within a minimal time duration.


Figure 4.9 (A) Results obtained for various medical images

(a).Input Image (b).DCT (c).SPIHT and (d).Fractal algorithms


Figure 4.9 (B) Results obtained for various medical images

(a).Input Image (b).DCT (c).SPIHT and (d).Fractal algorithms


Figure 4.9 (C) Results obtained for various medical images

(a).Input Image (b).DCT (c).SPIHT and (d).Fractal algorithms


Figure 4.9 (D) Results obtained for various medical images

(a).Input Image (b).DCT (c).SPIHT and (d).Fractal algorithms


Figure 4.9 (E) Results obtained for various medical images

(a).Input Image (b).DCT (c).SPIHT and (d).Fractal algorithms


Figure 4.9 (F) Results obtained for various medical images

(a).Input Image (b).DCT (c).SPIHT and (d).Fractal algorithm


The image quality parameters can be represented in graphical form to study the behaviour of each parameter with respect to the others for better understanding. This graphical representation of the test images with respect to CR, PSNR, execution time and memory is shown in Figures 4.5 to 4.8. The graphical representation of the test input images, with respect to MRI, CT and PET, is shown in Figures 4.9A to 4.9F.

4.5 CONCLUSION

In this chapter, three completely different compression approaches for medical images, namely Fractal, Discrete Cosine Transform and Set Partitioning in Hierarchical Trees, are compared. These approaches are tested against different medical images, such as human MRI, PET and CT images, using specific image quality parameters like Compression Ratio, Bits per pixel, Peak Signal to Noise Ratio and Mean Square Error. The results clearly show that the SPIHT methodology has a higher Compression Ratio (CR) and PSNR value with less Bpp and MSE for PET and MR brain images. Future work lies in developing a neural network framework, so as to realize higher compression results.


CHAPTER 5

EFFICIENT IMAGE COMPRESSION TECHNIQUES FOR

MULTIMODAL MEDICAL IMAGES USING RADIAL BASIS

FUNCTION NEURAL NETWORK APPROACH

5.1 INTRODUCTION

Recent radiology techniques offer vital medical information for

radiologists to diagnose diseases and find out appropriate treatments through

image processing techniques. Image compression is one of the image processing

techniques, which is used to reduce the number of bits that are required to

represent an image. Since, the medical images are in digital format, more cost-

effective compression techniques are required to reduce the mass volume of

digital image data produced in the hospitals. Medical image compression is a

challenging task as the high frequency components may contain important

information for medical diagnosis. In medical image compression applications,

diagnosis is efficient only when compression techniques preserve all the

significant and important image information. The idea of image compression

technique is to minimize the redundancy of the image data, in order to store or

transmit the data in a competent form. This results in size reduction and allows

more images to be accumulated in a given amount of disk or memory space.

Typically, compression scheme can be categorized into two major categories:

lossless and lossy compressions. The lossy image compression is not commonly

used in medical diagnosis, because it fails to interpret the critical information

for radiologists to diagnose the patient. In a lossless compression, no useful

information is lost due to the compression process. Huffman comes under lossless

and Fractal comes under lossy image compression. An image with size 512×512

is given as an input to all these above methods.


5.2 ALGORITHMS FOR IMAGE COMPRESSION

Different compression methods such as Fractal, Neural Network Back

Propagation (NNBP) and Radial Basis Function Neural Network (RBFNN) are

applied to various medical images such as MR and CT images. Experimental

results show that the NNRBF technique achieves a higher Compression Ratio

(CR), Bits per pixel (Bpp) and Peak Signal to Noise Ratio (PSNR), with less

Mean Square Error (MSE) on CT and MR images when compared to Fractal and

Neural Network Back Propagation techniques.

5.2.1 Fractal Algorithm

Fractal encoding is largely used to convert bitmap images to fractal codes. This encoding procedure is extremely computationally intensive: millions of iterations are required to discover the fractal patterns in an image. Depending on the resolution and content of the input bitmap data, and on the output quality, compression time and file size parameters chosen, compressing a single image can take anywhere from a few seconds to a couple of hours, even on a fast PC. All the decoding process needs to do is interpret the fractal codes and translate them into a bitmap image. Two huge advantages are immediately gained by converting conventional bitmap images to fractal data. The first is the ability to scale any fractal image up or down in size without the introduction of image artifacts or the loss of fine detail that occurs in bitmap images. This technique of "fractal zooming" is independent of the resolution of the original bitmap image, and the zooming is constrained only by the amount of available memory in the PC. The second point of interest is that the amount of physical data used to store the fractal codes is much smaller than the size of the original bitmap data. Indeed, it is not uncommon for fractal images to be more than


100 times smaller than their bitmap sources. This aspect of fractal technology, called fractal compression, has generated the greatest interest within the PC imaging industry. The process of matching fractals does not involve looking for exact matches, but instead looking for the "best fit" according to the compression parameters (encoding time, image quality, and size of output). The encoding process can, however, be controlled to the point where the image is "visually lossless"; that is, the data loss is undetectable. Fractal compression contrasts with other lossy compression techniques, for example JPEG, in various ways. JPEG achieves compression by discarding the image data that are not required for the human eye to see the image. The resulting data are then further compacted using a lossless compression procedure. To achieve more significant compression ratios, more image data must be discarded, resulting in a low quality image with a pixelized (blocky) appearance. Fractal images are not stored as a grid of pixels, nor is the encoding weighted towards the visual characteristics of the human eye. Rather, bitmap data is discarded when it is required to make a best-fit fractal pattern. Greater compression ratios are achieved using more computationally intensive transforms that may degrade the image, yet the distortion appears much more natural because of the fractal nature of the segments.

5.2.2 Neural Network Back Propagation (NNBP)

The back-propagation learning algorithm is one of the most significant improvements in neural networks. This learning algorithm is mainly applied to feed-forward networks that consist of processing elements with continuous, differentiable activation functions. In a BPNN, training input-output pairs are given as input for training; the algorithm provides a procedure for varying the weights in a BPNN so as to classify the given


input patterns appropriately. The very vital concept for this weight update

algorithm is gradient-descent method. The back-propagation algorithm is

completely different from other networks in respect to the process by which the

weights are calculated during the learning period of the network. There will be

three different layers, one input layer, one output layer and one hidden layer, are

assigned. Both of input layer in BPNN and output layer are fully connected to

hidden layer. Compression is obtained by designing the value of the number of

neurons in the both input layer and output layers neuron less than the hidden

layer neuron. Figure 5.1 shows General structure of Neural Network Back

Propagation Algorithm.

Figure 5.1 General structure of the Neural Network Back Propagation Algorithm
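The compression principle just described, a hidden layer narrower than the input and output layers trained by gradient descent, can be sketched as follows. This is a minimal single-block illustration with assumed sizes (a 16-pixel block squeezed to 4 hidden units) and an assumed learning rate, not the network configuration used in the experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_hidden = 16, 4                  # bottleneck: 4 hidden < 16 input/output
W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
W2 = rng.normal(0.0, 0.1, (n_in, n_hidden))
lr = 0.05

x = rng.random(n_in)                    # one normalized 4x4 pixel block
for _ in range(2000):                   # plain gradient-descent back-propagation
    h = sigmoid(W1 @ x)                 # hidden code = the compressed signal
    y = W2 @ h                          # linear reconstruction of the block
    e = y - x                           # output-layer error
    dh = (W2.T @ e) * h * (1.0 - h)     # error propagated back to hidden layer
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

h = sigmoid(W1 @ x)
mse = float(np.mean((W2 @ h - x) ** 2))
print(round(mse, 6))
```

After training, only the 4 hidden activations need be transmitted per block; the output weights reconstruct the 16 pixels at the receiver, and the reconstruction error shrinks toward zero as the gradient steps proceed.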

5.2.3 Neural Network Radial Basis Function for Image Compression

Radial basis function neural networks (RBFNN) are feed-forward networks trained using a supervised training algorithm. They are normally built with a single hidden layer of units whose output function is selected from a class of functions called basis functions. In its most basic form, the structure of an RBF network involves three entirely different layers, as shown in Figure 5.2. The input layer is made up of source nodes (sensory units) whose number equals the dimension N of the input vector. The second layer is the hidden layer, composed of nonlinear units connected directly to all of the nodes in the input layer; each hidden unit takes its input from all components of the input layer. The hidden unit contains a basis function with two parameters, a centre and a width. Figure 5.2 shows the general structure of the radial basis function neural network.

Figure 5.2 General structure of Radial Basis Function Neural Network.
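The two-stage character of such a network, a fixed nonlinear Gaussian hidden layer followed by linear output weights, can be illustrated with a small function-approximation sketch. The 1-D target, the number of centres and the width below are illustrative assumptions only.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian basis matrix: phi[i, j] = exp(-||x_i - c_j||^2 / (2 w^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy 1-D approximation: nonlinear hidden layer, linear output weights.
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
centers = np.linspace(0, 1, 8)[:, None]      # stage 1: fix the centres
Phi = rbf_design(X, centers, width=0.15)     # hidden-layer activations
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # stage 2: linear output weights
mse = float(np.mean((Phi @ w - y) ** 2))
print(round(mse, 6))
```

Because the output layer is linear, the second training stage reduces to a linear least-squares problem, which is one reason RBF networks converge quickly compared with iterative back-propagation.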

5.3 PERFORMANCE PARAMETERS

There are several parameters that can be used to compare the various image compression techniques. The efficiency of a compression algorithm is measured in terms of performance parameters such as Compression Ratio (CR), Peak Signal to Noise Ratio (PSNR), Bits per pixel (Bpp), Mean Square Error (MSE), and testing and training time.
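For reference, the standard definitions of these parameters for 8-bit images can be computed as below. The toy image and the assumed compressed size of 128 bits are only for demonstration.

```python
import numpy as np

def compression_metrics(original, reconstructed, compressed_bits):
    """MSE, PSNR (dB), bits per pixel and compression ratio for 8-bit images."""
    o = original.astype(float)
    r = reconstructed.astype(float)
    mse = float(np.mean((o - r) ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    bpp = compressed_bits / o.size            # bits spent per pixel
    cr = (o.size * 8) / compressed_bits       # original bits / compressed bits
    return mse, psnr, bpp, cr

img = np.full((8, 8), 100, dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] = 110                             # one pixel off by 10 grey levels
mse, psnr, bpp, cr = compression_metrics(img, noisy, compressed_bits=128)
print(round(mse, 4), round(psnr, 2), bpp, cr)
```

A single 10-level error in a 64-pixel block gives MSE = 100/64 = 1.5625 and a PSNR of about 46 dB; 128 compressed bits for 64 pixels give 2 bits per pixel and a compression ratio of 4.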

5.4 RESULTS AND COMPARISON

The simulation results for the various medical images and the comparison tables are included. The images with the highest CR and PSNR are shown below. The following figures (Figures 5.3 to 5.6) show the comparison charts of CR, PSNR, memory used and execution time of the various medical images for the Fractal, NNBP and NNRBF methods.


Table 5.1 Performance comparison of 24 medical images obtained using the Fractal, NNRBF and NNBP algorithms

Images | CR (Fractal, NNRBF, NNBP) | PSNR in dB (Fractal, NNRBF, NNBP) | Memory in kB (Fractal, NNRBF, NNBP) | Execution time in s (Fractal, NNRBF, NNBP)
CT Image 1 | 6.6507 1.0537 1.0495 | 38.2519 39.1906 69.6090 | 26.80 28.60 28.70 | 11.7500 17.4231 850.4214
CT Image 2 | 6.6622 0.8386 1.0356 | 38.7416 29.9663 42.3719 | 28.40 37.20 30.10 | 11.7969 16.4425 278.7227
CT Image 3 | 13.2030 1.0632 1.0330 | 35.8632 30.6855 62.5396 | 43.20 43.60 44.90 | 16.3281 18.6835 542.2186
CT Image 4 | 5.0707 1.1561 1.0328 | 38.9335 19.5331 40.2157 | 25.00 24.50 27.40 | 10.8594 18.5866 935.6339
MR Image 5 | 6.5164 0.9678 1.0564 | 43.6002 32.7573 62.0237 | 21.90 26.70 24.50 | 12.0000 18.5779 387.0713
MR Image 6 | 10.9643 1.1854 1.0423 | 41.7960 22.9515 46.7436 | 26.40 25.70 29.20 | 14.8125 18.2230 667.3835
MR Image 7 | 4.4960 1.2092 1.0398 | 39.0490 22.3658 44.7684 | 23.00 25.40 29.50 | 10.5625 18.2627 658.0410
MR Image 8 | 7.5873 1.117 1.0442 | 42.3063 29.9071 48.5713 | 25.10 27.40 29.30 | 12.3906 18.7018 1164.2615
MR Image 9 | 5.1806 1.1416 1.0208 | 43.6769 24.6009 45.7868 | 20.50 21.90 24.50 | 10.9688 18.3615 836.9937
MR Image 10 | 6.9771 1.0116 1.0591 | 43.3527 44.3570 73.0390 | 23.70 25.90 24.80 | 12.1094 18.6340 751.2015
MR Image 11 | 6.5397 0.9904 1.0605 | 43.0880 44.2258 68.0856 | 23.00 25.80 24.10 | 11.9219 18.7667 294.9351
MR Image 12 | 3.4217 0.956 1.0536 | 40.7357 29.3755 54.1458 | 22.30 26.30 23.90 | 9.8438 17.8159 415.8369
MR Image 13 | 1.8435 1.0728 1.0837 | 43.4892 50.9330 66.8312 | 17.80 17.30 17.10 | 9.0625 17.9883 497.0650
MR Image 14 | 2.3289 1.1096 1.0728 | 41.3458 35.7803 47.6965 | 15.90 71.50 18.10 | 9.0938 17.2854 313.2422
MR Image 15 | 3.0478 1.0365 1.0514 | 39.6035 46.2050 72.3725 | 20.50 27.40 27.00 | 9.4844 17.7939 285.6100
MR Image 16 | 3.4444 1.0499 1.0513 | 39.9027 32.9302 57.2598 | 22.70 27.30 27.30 | 10.0625 18.7660 610.1797
MR Image 17 | 4.8070 0.9978 1.0426 | 39.0449 37.5945 84.0402 | 29.90 33.30 31.90 | 10.7031 26.7360 843.5427
MR Image 18 | 3.0478 1.0365 1.0514 | 3.0478 1.0365 72.3725 | 20.50 27.40 27.00 | 9.5000 27.1599 285.4747
MR Image 19 | 3.4270 1.1379 0.9938 | 3.4270 1.1379 36.2982 | 28.20 41.40 47.10 | 10.1094 17.9907 654.8289
MR Image 20 | 4.1237 1.003 1.0398 | 34.1952 38.2271 69.1297 | 32.70 40.60 39.20 | 10.5156 18.1835 504.8591
MR Image 21 | 6.2566 1.0636 1.0591 | 43.3019 32.7369 73.0390 | 20.60 22.90 24.80 | 11.5313 18.5056 744.2455
PET Image 22 | 5.5154 1.0352 1.0530 | 43.3578 33.4930 50.2760 | 19.20 22.80 22.40 | 11.2656 18.8691 652.3676
PET Image 23 | 1.4721 0.7923 0.7662 | 43.9157 24.4163 42.7221 | 14.30 18.70 19.40 | 8.5625 16.5292 452.7775
PET Image 24 | 1.1946 1.0696 1.0709 | 39.3448 52.1345 84.0081 | 18.80 19.60 19.50 | 8.8125 16.7909 378.8078


Figure 5.3 shows the compression ratio of the algorithms Fractal, NNBP and NNRBF, of which NNRBF produces the better compression ratio.

Figure 5.3 Compression Ratio expressed in percentage

Figure 5.4 shows the Peak Signal to Noise Ratio of the algorithms Fractal, NNBP and NNRBF. It is clearly identifiable that NNRBF is capable of producing high PSNR values.

Figure 5.4 Peak Signal to Noise Ratio expressed in decibels


Figure 5.5 Memory expressed in kilo byte

Figure 5.5 shows the memory usage of the three algorithms, namely Fractal, NNRBF and NNBP. It is noticed that Fractal uses the least memory for image compression.

Figure 5.6 Execution Time expressed in Seconds


Figure 5.6 shows the execution time of the three algorithms, namely Fractal, NNRBF and NNBP. Here, Fractal produces compression results within the minimum time.

Figure 5.7(A) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). Neural Network Back Propagation (NNBP) and (d). Radial Basis Function Neural Network algorithms


Figure 5.7(B) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). Neural Network Back Propagation (NNBP) and (d). Radial Basis Function Neural Network algorithms.


Figure 5.7 (C) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). Neural Network Back Propagation(NNBP) and (d). Radial Basis Function Neural Network algorithms.


Figure 5.7 (D) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). Neural Network Back Propagation(NNBP) and (d). Radial Basis Function Neural Network algorithms.


Figure 5.7(E) Results obtained for various medical images (a). Input Images (b). Fractal, (c). Neural Network Back Propagation(NNBP)

and (d). Radial Basis Function Neural Network algorithms.


Figure 5.7(F) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). Neural Network Back Propagation(NNBP) and (d). Radial Basis Function Neural Network algorithms.

The above figures (Figures 5.7A to 5.7F) show the original and reconstructed MR and CT images obtained using the Fractal, Neural Network Back Propagation and Radial Basis Function Neural Network methods.


5.5 CONCLUSION

In this chapter, four different approaches, namely the Huffman, Fractal, Neural Network Back Propagation and Radial Basis Function Neural Network algorithms, are compared for medical image compression. These approaches are tested with different medical images such as MR and CT images. In order to identify the better compression method, specific image quality parameters such as compression ratio, bits per pixel, peak signal to noise ratio, mean square error and execution time have been calculated. The results clearly show that the Radial Basis Function Neural Network method has a low Compression Ratio (CR) and a high PSNR value, with less BPP and MSE, for MR and CT images. Thus, the Radial Basis Function network is found to be efficient when compared to the Huffman, Fractal and Neural Network Back Propagation algorithms. Future work is to develop a hybrid approach, combining two or more of these algorithms or soft computing techniques, in order to achieve better compression results.


CHAPTER 6

A HYBRID DISCRETE WAVELET TRANSFORM WITH

NEURAL NETWORK BACK PROPAGATION APPROACH

FOR EFFICIENT MEDICAL IMAGE COMPRESSION

6.1 INTRODUCTION

An image usually contains a huge amount of data and requires a large amount of space in memory. A compressed image occupies less memory and requires less time for transmission. The principal function of image compression is to reduce the information, which may degrade the image, depending on the compression ratio (CR). CR is one of the best parameters for judging the quality of a compressed image; by evaluating the CR, the quality of an image can be predicted. Usually, the input is in the form of an analog image, which is sampled and quantized to obtain a digital image to which the compression algorithm can be applied. For example, transferring an uncompressed image of size 512×512 can take several minutes to reach the receiver; by using a compression algorithm, many medical images can be sent simultaneously in a shorter time. The importance of image compression is thus to decrease the cost of storage space and communication.

6.2 ALGORITHMS USED

6.2.1 Back Propagation Neural Networks Algorithm

Neural networks are nowadays an important emerging tool, highly applicable to image processing techniques. A BPNN is trained on many input-output pairs; given such a pair, the BPNN algorithm provides a procedure for varying the weights so that the network classifies the given input patterns correctly. The gradient-descent method is the standard method for weight updating in a BPNN. In a BPNN the weights are computed during the learning period of the system, and in this respect it differs from other algorithms. The back-propagation network involves three different types of layers, namely the input, output and hidden layers, and the number of neurons chosen for the hidden layer relative to the input and output layers determines the degree of compression.

Figure 6.1 General Structure of BPNN

The first step of image compression in a BPNN is to decompose the input image into pixel blocks; these encoded blocks are given as input to the network. The image is then transmitted and recovered at the receiver side. The network has an input layer, at least one hidden layer and an output layer. The next step is to encode the spatial coordinates of the pixels. Entropy encoding, a form of lossless compression, converts the image from a two-dimensional to a one-dimensional sequence of values, which is then compressed. After the compressed image is obtained, the error can be calculated across all three layers.
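The lossless stage mentioned above, turning the two-dimensional block into a one-dimensional sequence before coding it, can be sketched with simple run-length coding as a stand-in (the specific entropy coder is not detailed here, so this is an illustrative assumption):

```python
import numpy as np

def run_length_encode(img):
    """Flatten a 2-D block to 1-D raster order, then losslessly encode it
    as (value, run) pairs -- a simple stand-in for the entropy-coding stage."""
    flat = img.ravel()
    pairs, run = [], 1
    for prev, cur in zip(flat, flat[1:]):
        if cur == prev:
            run += 1
        else:
            pairs.append((int(prev), run))
            run = 1
    pairs.append((int(flat[-1]), run))
    return pairs

def run_length_decode(pairs, shape):
    flat = [v for v, n in pairs for _ in range(n)]
    return np.array(flat, dtype=np.uint8).reshape(shape)

block = np.array([[5, 5, 5, 7], [7, 7, 0, 0]], dtype=np.uint8)
code = run_length_encode(block)
print(code)
```

Decoding the pairs restores the block exactly, which is the defining property of the lossless stage; the redundancy of plain (smooth) image areas is what makes such runs long and the code short.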


6.2.2 Discrete Wavelet Transform

Among the different image processing techniques, the DWT applies successive high-pass and low-pass filters by which the image is divided into subbands. Decomposition in the wavelet transform consists of two parts: i) the approximation of the image (scaling function) and ii) the detail part of the image (wavelet function).
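These two parts can be illustrated with one level of the Haar wavelet, the simplest DWT. The sketch below uses an unnormalized averaging/differencing variant for readability; this is an implementation choice for illustration, not necessarily the filter bank used in the thesis experiments.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: approximation subband LL (scaling
    function) plus detail subbands LH, HL, HH (wavelet function)."""
    a = img.astype(float)
    # rows: low-pass (pairwise average) and high-pass (pairwise difference)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # columns: repeat the same split on both row outputs
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

img = np.array([[10, 10, 20, 20],
                [10, 10, 20, 20],
                [30, 30, 40, 40],
                [30, 30, 40, 40]])
LL, LH, HL, HH = haar_dwt2(img)
print(LL)
```

For this piecewise-constant test image all detail subbands are exactly zero and the quarter-size approximation LL carries all of the information, which is precisely why smooth, redundant image regions compress so well under the DWT.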

Nowadays much research is devoted to wavelet representations and transforms. Areas with no noise appear as plain areas in an image, and these areas have a very high degree of redundancy. This chapter discusses the hybrid combination of DWT and NNBP. To obtain a better compressed image without degrading its quality, the CR should be low and the PSNR high. Thus, in this chapter the two algorithms, DWT and BP, are hybridized, and this gives better CR and PSNR.

Figure 6.2 Block diagram of Hybrid DWT-BP Algorithm

Here, input images of size 512×512 are given for compression. First, the images are passed to the DWT algorithm, which performs an initial compression; the output of the DWT stage is then given as input to the BPNN for further compression. The image obtained as the output of the BPNN is the compressed image, which has better CR and PSNR. We expect the proposed method to give an efficient CR and PSNR. A comparison chart of the existing and proposed methods is given in Figure 6.3.

(Figure 6.2 block diagram: Input Images → DWT Algorithm → BPNN → Compressed Images)
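The cascade of Figure 6.2 can be sketched end to end as below. For brevity, the trained BPNN stage is replaced here by a fixed random projection pair, an assumption made purely to keep the sketch self-contained and runnable; the point being shown is the composition, DWT first, then a second coding stage on the approximation subband.

```python
import numpy as np

def haar_ll(a):
    """Approximation subband of a one-level Haar DWT (the DWT stage)."""
    a = a.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    return (lo[0::2] + lo[1::2]) / 2.0

rng = np.random.default_rng(2)
img = rng.random((8, 8))
ll = haar_ll(img)                       # stage 1: DWT, 8x8 -> 4x4
x = ll.ravel()

# Stage 2: bottleneck-network stand-in (hypothetical, untrained): encode the
# 16 approximation coefficients into 4 numbers and decode by least squares.
enc = rng.normal(size=(4, 16))
code = enc @ x                          # 64 pixels -> 4 coefficients end to end
x_hat = np.linalg.pinv(enc) @ code      # least-squares reconstruction
print(code.size, img.size)
```

In the actual hybrid method the second stage would be the trained BPNN of Section 6.2.1 rather than a random projection, so the code it produces reconstructs the subband with far lower error.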


Figure 6.3 Comparison chart of proposed work and existing method

6.3 PERFORMANCE PARAMETERS

Many parameters are used to measure the quality of any compressed

image. Commonly used parameters are Mean Square error (MSE), Peak signal to

Noise Ratio (PSNR), Compression Ratio (CR) and Bits per pixel (BPP).

6.4 RESULTS AND DISCUSSION

From Table 6.1, it is observed that all the parameters obtained in the evaluation test are within acceptable limits. Hence, after compression, the quality of the image is not degraded. The simulation results for

MR, PET and CT images are shown below and the comparison tables are also

included. The images with the highest CR and PSNR are included below. The

following figures (Figure 6.4 to 6.7) show the comparison chart of CR, PSNR,

Memory used and Execution time of various medical images for methods like

DWT, NNBP and Hybrid DWT with NNBP.


The analysis of the various techniques shows that the Hybrid DWT with NNBP provides better CR values and occupies less memory space compared to the other techniques (DWT, NNBP). DWT provides higher PSNR values with less execution time. Therefore, we can conclude that the Hybrid DWT with NNBP compression method is best suited to MR, PET and CT medical images in terms of CR and memory space.


Table 6.1 Performance comparison of 24 medical images obtained using the DWT, BPNN and hybrid DWT-BP algorithms

Images | CR (DWT, NNBP, Hybrid DWT-BP) | PSNR in dB (DWT, NNBP, Hybrid DWT-BP) | Memory in kB (DWT, NNBP, Hybrid DWT-BP) | Execution time in s (DWT, NNBP, Hybrid DWT-BP)
CT Image 1 | 1.0002 1.0495 1.0005 | 71.7728 69.6090 61.3201 | 28.70 28.70 24.1 | 1.0867 850.4214 848.0286
CT Image 2 | 1.0011 1.0356 1.0121 | 71.3010 42.3719 39.1269 | 29.80 30.10 17.2 | 1.0818 278.7227 279.6484
CT Image 3 | 1.0015 1.0330 1.0016 | 55.7033 62.5396 47.2912 | 44.80 44.90 30.3 | 2.3126 542.2186 546.848
CT Image 4 | 1.0020 1.0328 0.568 | 67.3440 40.2157 29.4004 | 26.90 27.40 26.9 | 1.6483 935.6339 938.7183
MR Image 5 | 1.0005 1.0564 0.9254 | 67.7753 62.0237 38.5935 | 24.50 24.50 51.4 | 1.1104 387.0713 354.3677
MR Image 6 | 1.0048 1.0423 0.8844 | 63.8786 46.7436 31.9976 | 28.90 29.20 70.5 | 1.1089 667.3835 668.1684
MR Image 7 | 1.0229 1.0398 0.0969 | 62.7552 44.7684 29.4417 | 28.60 29.50 49.6 | 1.2349 658.0410 656.5766
MR Image 8 | 1.0265 1.0442 0.5601 | 60.1968 48.5713 23.6815 | 28.40 29.30 44.9 | 1.0894 1164.2615 874.3936
MR Image 9 | 1.0144 1.0208 0.0999 | 64.2066 45.7868 25.814 | 23.30 24.50 55.5 | 1.0646 836.9937 798.0145
MR Image 10 | 1.0001 1.0591 1.005 | 67.8580 73.0390 47.1406 | 24.80 24.80 11.6 | 1.1229 751.2015 786.3682
MR Image 11 | 0.9999 1.0605 0.5603 | 68.4217 68.0856 43.8376 | 24.10 24.10 10.3 | 1.1197 294.9351 287.1838
MR Image 12 | 1.0073 1.0536 0.8914 | 67.9670 54.1458 38.7958 | 23.70 23.90 22.1 | 1.1865 415.8369 425.006
MR Image 13 | 1.0013 1.0837 0.9743 | 72.8183 66.8312 38.0442 | 17.10 17.10 17.8 | 1.0898 497.0650 504.9017
MR Image 14 | 1.0006 1.0728 0.0518 | 72.5798 47.6965 24.6016 | 18.00 18.10 3.32 | 1.0197 313.2422 321.9819
MR Image 15 | 1.0001 1.0514 1.0052 | 64.5986 72.3725 43.2033 | 27.00 27.00 27 | 1.0705 285.6100 293.2718
MR Image 16 | 1.0023 1.0513 1.0115 | 65.2455 57.2598 40.3127 | 27.20 27.30 45.2 | 1.0782 610.1797 633.1677
MR Image 17 | 1.0001 1.0426 0.9986 | 61.9274 84.0402 64.8598 | 31.90 31.90 5.78 | 1.1614 843.5427 858.127
MR Image 18 | 1.0001 1.0514 1.0052 | 64.5986 72.3725 43.2033 | 27.00 27.00 27 | 1.0800 285.4747 293.2718
MR Image 19 | 1.0152 0.9938 0.0471 | 54.2999 36.2982 23.7107 | 45.00 47.10 70.5 | 1.0605 654.8289 663.0196
MR Image 20 | 1.0015 1.0398 1.002 | 60.5071 69.1297 49.3025 | 39.10 39.20 19.2 | 1.2132 504.8591 551.1406
MR Image 21 | 1.0001 1.0591 1.005 | 67.8580 73.0390 47.1406 | 24.80 24.80 11.6 | 1.0645 744.2455 758.223
PET Image 22 | 1.0069 1.0530 0.4557 | 66.0979 50.2760 27.168 | 21.90 22.40 23.3 | 1.1157 652.3676 653.9199
PET Image 23 | 1.0006 0.7662 0.2325 | 78.9087 42.7221 28.3994 | 13.40 19.40 15 | 1.0681 452.7775 463.315
PET Image 24 | 1.0048 1.0709 1.0007 | 70.6387 84.0081 48.427 | 19.40 19.50 9.54 | 1.1340 378.8078 388.1468


Figure 6.4 Comparison of Compression Ratio for different Input images

Figure 6.4 shows the comparison of CR for different input images. The CR for the DWT algorithm is lower.

Figure 6.5 Comparison of PSNR Values for different Input Image

Figure 6.5 shows that the PSNR for the DWT algorithm is very low compared to the other two techniques. Therefore, it can be concluded that the hybrid DWT-BP algorithm gives better PSNR values.


Figure 6.6 Memory expressed in kilo byte

Figure 6.6 shows the memory usage of the three algorithms, namely DWT, NNBP and Hybrid DWT-BP. It is noticed that DWT uses the least memory for image compression.

Figure 6.7 Execution Time expressed in Seconds

Figure 6.7 shows the execution time of the three algorithms, namely DWT, NNBP and Hybrid DWT-BP. Here, DWT produces compression results within the minimum time.


Figure 6.8 (A) Results obtained for various medical images

(a). Input Images (b). DWT, (c). NNBP and (d). Hybrid DWT-BP algorithms


Figure 6.8 (B) Results obtained for various medical images

(a). Input Images (b). DWT, (c). NNBP and (d). Hybrid DWT-BP algorithms


Figure 6.8 (C) Results obtained for various medical images

(a). Input Images (b). DWT, (c). NNBP and (d). Hybrid DWT-BP algorithms


Figure 6.8 (D) Results obtained for various medical images

(a). Input Images (b). DWT, (c). NNBP and (d). Hybrid DWT-BP algorithms


Figure 6.8 (E) Results obtained for various medical images

(a). Input Images (b). DWT, (c). NNBP and (d). Hybrid DWT-BP algorithms


Figure 6.8 (F) Results obtained for various medical images

(a). Input Images (b). DWT, (c). NNBP and (d). Hybrid DWT-BP algorithms


Figures 6.8A to 6.8F show the compressed images obtained using DWT, BPNN and Hybrid DWT-BP, along with the respective original images (by simulation). All input images are of size 512×512.

6.5 CONCLUSION

Image compression based on DWT, NNBP and Hybrid DWT-NNBP has been discussed. An input image of size 512×512 is given, and the compressed image is obtained by each of the above algorithms. Various parameters are calculated to assess the quality of the compressed image. From the comparison charts given in Figures 6.4 to 6.7, it can be concluded that, for both CR and PSNR, the Hybrid DWT-NNBP gives the most efficient results among the three algorithms.


CHAPTER 7

A HYBRID APPROACH USING FRACTAL AND NEURAL

NETWORK RADIAL BASIS FUNCTION FOR EFFICIENT

COMPRESSION OF MULTI MODAL MEDICAL IMAGES

7.1 INTRODUCTION

Compression is the process of reducing the file size by rearranging the information in the file. Compressing images is different from zipping files: image compression changes the structure and the content of the information within a file, and the loss of data may or may not be noticeable. The amount of compression achievable is influenced by the type of image, and a greater compression ratio can be accomplished in some portions of the image. Compression is thus an important technique, essential for storing an image and transmitting it over long distances. An unprocessed image occupies more memory and needs to be compressed. Because high-quality images are required, for example when building a video, lossless compression is often needed; lossless compression is a method that permits the original data to be reconstructed exactly from the compressed data. In general it does two things: first, it generates a statistical model of the input information; second, it maps the input information to bit sequences.

Fractal image compression is based on the fractals of various images. The merits of converting images to fractal data are 1) the reduced memory requirement of the compressed image and 2) the quantification of parameters such as Compression Ratio (CR), Peak Signal to Noise Ratio (PSNR), Bits per pixel (Bpp) and others. The number of RBFs used to encode a sub-image is much lower than the number of data points, which results in a reduction of the data size. RBF networks


configure a neural network architecture that is extensively used for modeling and

controlling nonlinear systems.

In the existing fractal algorithm, the CR is good but the Mean Square Error (MSE) is high and the PSNR value is low. The proposed Radial Basis Function Neural Network (RBFNN), by contrast, takes a small convergence time during the training period and produces good PSNR values. A new hybrid approach combining the working principles of Fractal and RBFNN is implemented here, and comparisons with the existing algorithms are also exhibited.

7.2 METHODOLOGIES

7.2.1 Fractal Algorithm

The term Fractal was first used by Benoit Mandelbrot in the year 1975. The method uses the original image and makes three exact copies. Fractal encoding is generally used to encode bitmap images by means of mathematical techniques, and a set of numerical information expresses the fractal properties of the image.

7.2.2 Neural Network Radial Basis for Image Compression

The idea of Radial Basis Function (RBF) networks derives from the theory of function approximation. We have already seen how Multi-Layer Perceptron (MLP) networks with a hidden layer of sigmoidal units can learn to approximate functions. RBF networks take a slightly different approach: the network has a single hidden layer, and the basic neuron model as well as the function of the hidden layer differs from that of the output layer. The hidden layer is nonlinear, with each unit computing the Euclidean distance between the input vector and the centre of that unit, while the output layer is a linear function of the hidden-unit activations.


The main features are as follows: it is a two-layer feed-forward network; the hidden nodes implement a set of radial basis functions; and the output nodes implement linear summation functions, as in an MLP. Network training is divided into two stages: first the weights from the input to the hidden layer are determined, and then the weights from the hidden to the output layer. Such networks are very good at interpolation.

Radial basis networks can be used to approximate functions. Neurons are added to the hidden layer of a radial basis network until it meets the specified mean squared error goal. The larger the spread, the smoother the function approximation. Too large a spread means many neurons are required to fit a fast-changing function; too small a spread means many neurons are required to fit a smooth function, and the network may not generalize well. The radial basis network should therefore be retrained with different spreads to find the best value for a given problem.
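The effect of the spread can be demonstrated numerically by repeating the same least-squares RBF fit with a too-small, a moderate and a large Gaussian width. The target function, the ten centres and the specific widths below are illustrative assumptions only.

```python
import numpy as np

def rbf_fit_error(width):
    """Least-squares RBF fit of sin(2*pi*x) for a given Gaussian spread."""
    X = np.linspace(0, 1, 50)
    C = np.linspace(0, 1, 10)                       # 10 fixed centres
    Phi = np.exp(-(X[:, None] - C[None, :]) ** 2 / (2.0 * width ** 2))
    t = np.sin(2 * np.pi * X)
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)     # linear output weights
    return float(np.mean((Phi @ w - t) ** 2))

for width in (0.01, 0.1, 1.0):
    print(width, rbf_fit_error(width))
```

With a tiny spread the basis functions are near-spikes that leave most sample points uncovered, so the error is large; a moderate spread fits the smooth target almost exactly; a very large spread makes the basis functions nearly identical, again limiting the fit.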

Figure 7.1 General structure of NNRBF


7.3 IMPLEMENTATION OF HYBRID TECHNIQUES

7.3.1 Hybrid image compression

Figure 7.2 Hybrid image compression using FNNRBF method

Figure 7.2 shows the proposed hybrid image compression using FNNRBF. The NN-RBF algorithm is used to improve the transformation process, which increases the edge threshold. Simultaneously, the fractal coding and NN-RBF algorithms are combined to obtain the hybrid FNNRBF coding, in order to achieve better quality in image compression.

7.4 IMAGE QUALITY PARAMETER EVALUATION

Generally, the image quality parameters are evaluated by MATLAB simulation and verified using the mathematical equations. Two-dimensional multimodal medical images are preferred for an elaborate analysis, from which the different image quality parameters are computed. The image quality measures are the figures of merit used for the evaluation of imaging systems; they exhibit the capability of the computation and demonstrate the outcome. The quality of the compressed image can be measured by numerous parameters. The most commonly used are Compression Ratio (CR), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Bits per pixel (Bpp), memory and execution time. In general, the lower the CR value and the higher the PSNR value, the better the compressor.
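Memory and execution time can be measured directly around any codec under test with a wall-clock timer, as sketched below. The downsampling "codec" is a hypothetical stand-in, used only to show the measurement harness, not any of the methods compared in this thesis.

```python
import time
import numpy as np

def measure(codec, img):
    """Wall-clock execution time (seconds) and compressed size (kB) of a codec."""
    t0 = time.perf_counter()
    out = codec(img)
    elapsed = time.perf_counter() - t0
    return elapsed, out.nbytes / 1024.0

# Hypothetical stand-in codec: keep every second pixel in each direction.
downsample = lambda a: np.ascontiguousarray(a[::2, ::2])

img = np.zeros((512, 512), dtype=np.uint8)      # 256 kB input image
seconds, kilobytes = measure(downsample, img)
print(kilobytes)
```

For the 512×512 8-bit input the stand-in produces a 64 kB output; applying the same harness to each candidate algorithm yields the memory and execution-time columns reported in the comparison tables.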

7.5 SIMULATION RESULTS AND ANALYSIS

The image quality parameters can be represented in graphical form to study the behaviour of each parameter with respect to the others, for better understanding. The following figures (Figures 7.4 to 7.7) show the comparison charts of CR, PSNR, memory used and execution time of various medical images for the Fractal, NNRBF and Hybrid FNNRBF methods.

CR and PSNR are better with Hybrid FNNRBF, and it is identifiable that the compressed image size is much smaller with Hybrid FNNRBF. The execution time is also greatly reduced by using Hybrid FNNRBF, while Fractal provides higher PSNR values with less execution time. Therefore, we can conclude that the Hybrid FNNRBF compression method is best suited to MR, PET and CT medical images in terms of CR and memory space. Table 7.1 shows the results obtained using NNRBF, Fractal and Hybrid FNNRBF.


Table 7.1 Performance comparison of 24 medical images which are obtained by using NNRBF, Fractal and Hybrid FNNRBF

(For each metric the three columns are, in order, Fractal / NNRBF / Hybrid Fractal & NNRBF. Memory is in kilobytes and execution time in seconds.)

Images         CR                        | PSNR                        | Memory                | Execution Time
CT Image 1     6.6507   1.0537   1.0424  | 38.2519  39.1906  34.1945   | 26.80  28.60  25.70   | 11.7500  17.4231  19.6851
CT Image 2     6.6622   0.8386   0.6958  | 38.7416  29.9663  29.4611   | 28.40  37.20  28.40   | 11.7969  16.4425  18.5039
CT Image 3     13.2030  1.0632   1.0482  | 35.8632  30.6855  31.0718   | 43.20  43.60  41.20   | 16.3281  18.6835  23.0971
CT Image 4     5.0707   1.1561   1.0581  | 38.9335  19.5331  20.6044   | 25.00  24.50  23.60   | 10.8594  18.5866  21.1245
MR Image 5     6.5164   0.9678   0.9345  | 43.6002  32.7573  33.1091   | 21.90  26.70  23.40   | 12.0000  18.5779  23.6943
MR Image 6     10.9643  1.1854   1.1463  | 41.7960  22.9515  23.1293   | 26.40  25.70  23.10   | 14.8125  18.2230  21.6958
MR Image 7     4.4960   1.2092   0.3788  | 39.0490  22.3658  16.4623   | 23.00  25.40  23.00   | 10.5625  18.2627  18.4104
MR Image 8     7.5873   1.117    1.1054  | 42.3063  29.9071  30.0208   | 25.10  27.40  22.70   | 12.3906  18.7018  20.8762
MR Image 9     5.1806   1.1416   0.9305  | 43.6769  24.6009  24.6205   | 20.50  21.90  22.00   | 10.9688  18.3615  18.9826
MR Image 10    6.9771   1.0116   1.0085  | 43.3527  44.3570  46.0139   | 23.70  25.90  23.50   | 12.1094  18.6340  20.9603
MR Image 11    6.5397   0.9904   0.9703  | 43.0880  44.2258  44.5761   | 23.00  25.80  23.00   | 11.9219  18.7667  21.6570
MR Image 12    3.4217   0.956    0.9271  | 40.7357  29.3755  31.4664   | 22.30  26.30  22.30   | 9.8438   17.8159  19.7808
MR Image 13    1.8435   1.0728   0.8469  | 43.4892  50.9330  44.3137   | 17.80  17.30  21.00   | 9.0625   17.9883  20.0986
MR Image 14    2.3289   1.1096   0.7947  | 41.3458  35.7803  27.3508   | 15.90  71.50  15.90   | 9.0938   17.2854  18.6382
MR Image 15    3.0478   1.0365   0.9788  | 39.6035  46.2050  42.7904   | 20.50  27.40  20.50   | 9.4844   17.7939  18.6295
MR Image 16    3.4444   1.0499   0.8910  | 39.9027  32.9302  28.8155   | 22.70  27.30  22.70   | 10.0625  18.7660  20.1662
MR Image 17    4.8070   0.9978   0.9794  | 39.0449  37.5945  37.4232   | 29.90  33.30  29.90   | 10.7031  26.7360  20.4192
MR Image 18    3.0478   1.0365   0.9788  | 3.0478   1.0365   0.9788    | 20.50  27.40  20.50   | 9.5000   27.1599  18.9999
MR Image 19    3.4270   1.1379   0.8874  | 3.4270   1.1379   0.8874    | 28.20  41.40  28.20   | 10.1094  17.9907  19.0742
MR Image 20    4.1237   1.003    0.9852  | 34.1952  38.2271  38.9271   | 32.70  40.60  33.20   | 10.5156  18.1835  20.2288
MR Image 21    6.2566   1.0636   0.6189  | 43.3019  32.7369  26.1410   | 20.60  22.90  20.60   | 11.5313  18.5056  18.7640
PET Image 22   5.5154   1.0352   1.0080  | 43.3578  33.4930  32.3096   | 19.20  22.80  19.10   | 11.2656  18.8691  18.8583
PET Image 23   1.4721   0.7923   0.3612  | 43.9157  24.4163  21.6748   | 14.30  18.70  39.60   | 8.5625   16.5292  19.5478
PET Image 24   1.1946   1.0696   1.0687  | 39.3448  52.1345  52.1528   | 18.80  19.60  17.50   | 8.8125   16.7909  20.1511
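The tabulated results can be summarised mechanically. As a small illustration (Python used here purely as a sketch, not part of the thesis experiments), the snippet below averages the CR columns of the first three CT rows of Table 7.1, with the values copied directly from the table.

```python
# CR values for CT Images 1-3, taken from Table 7.1
# (order: Fractal, NNRBF, Hybrid Fractal & NNRBF).
rows = {
    "CT Image 1": (6.6507, 1.0537, 1.0424),
    "CT Image 2": (6.6622, 0.8386, 0.6958),
    "CT Image 3": (13.2030, 1.0632, 1.0482),
}

methods = ("Fractal", "NNRBF", "Hybrid FNNRBF")
# Column-wise mean over the three rows, one mean per method.
means = [sum(r[i] for r in rows.values()) / len(rows) for i in range(3)]
for name, m in zip(methods, means):
    print(f"{name}: mean CR = {m:.4f}")
```

On these three rows the mean CR is about 8.8386 for Fractal, 0.9852 for NNRBF and 0.9288 for Hybrid FNNRBF, consistent with the chapter's observation that the hybrid method yields the lowest CR.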


Figure 7.3 Compression Ratio expressed in percentage

Figure 7.3 shows the compression ratio of three different algorithms, namely Neural Network Radial Basis Function, Fractal and Hybrid Fractal & NNRBF. It is clearly evident that Hybrid Fractal & NNRBF provides better CR values.

Figure 7.4 PSNR expressed in decibel

Figure 7.4 shows the PSNR for three different algorithms, NNRBF, Fractal and Hybrid Fractal & NNRBF. Here, Hybrid Fractal & NNRBF provides better PSNR values.


Figure 7.5 Memory expressed in kilobytes

Figure 7.5 shows the memory usage of three different algorithms namely,

NNRBF, Fractal and Hybrid Fractal & NNRBF. It is noticed that Hybrid Fractal & NNRBF uses less memory for image compression.

Figure 7.6 Execution Time expressed in seconds

Figure 7.6 shows the execution time of three different algorithms, namely NNRBF, Fractal and Hybrid Fractal & NNRBF. Here, Hybrid Fractal & NNRBF produces compression results within a minimal time duration.


Figure 7.7(A) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). NNRBF and (d). Hybrid Fractal & NNRBF algorithms


Figure 7.7(B) Results obtained for various medical images (a). Input Images (b). Fractal, (c). NNRBF and (d). Hybrid Fractal & NNRBF algorithms


Figure 7.7(C) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). NNRBF and (d). Hybrid Fractal & NNRBF algorithms


Figure 7.7(D) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). NNRBF and (d). Hybrid Fractal & NNRBF algorithms


Figure 7.7(E) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). NNRBF and (d). Hybrid Fractal & NNRBF algorithms


Figure 7.7(F) Results obtained for various medical images

(a). Input Images (b). Fractal, (c). NNRBF and (d). Hybrid Fractal & NNRBF algorithms


Figures 7.7 (A-F) (a) Input images, (b) output obtained from Fractal, (c) output

obtained from NNRBF and (d) output obtained from Hybrid Fractal & NNRBF

compression algorithms.

7.6 CONCLUSION

Standard prior techniques have been studied to overcome the issues of compressing medical images. A review of these studies shows that the performance of medical image compression depends strongly on the compression ratio as well as on the perceptible quality of the compressed image. A compressed image with better perceptual quality retains clinically important information to a higher degree and aids the diagnostician in obtaining better results, and plenty of research has been carried out in search of an efficient solution for medical image compression. In the existing Fractal algorithm, the compression ratio is efficient but the mean square error is high, so the PSNR value becomes low. The Neural Network Radial Basis Function takes very little convergence time during the training period. A new radial basis function network based on a neural network scheme has been implemented and compared with existing algorithms. The proposed Hybrid Fractal with NNRBF image compression method achieves better compression ratio and PSNR values, and requires minimal time and reduced memory space for performing image compression. Image compression is provided by the Fractal, Radial Basis Function Neural Network and Hybrid Fractal & NNRBF algorithms. Hybrid FNNRBF provides compressed images with lower CR in a minimized time duration in comparison with the Fractal and NNRBF algorithms. The soft computing techniques proposed through this research have been applied to image compression and have produced better image quality in comparison with analytical and statistical algorithms. Three different approaches, Fractal, Radial Basis Function Neural Network and Hybrid Fractal & NNRBF, are applied to medical image compression and compared.


Here, MR and CT images are compared using quality parameters such as CR, PSNR, execution time and memory usage. It is observed that the FNNRBF method has low CR and high PSNR values. Hybrid Fractal & NNRBF is found to be more efficient than the Fractal and NNRBF methods.
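The radial basis function network used above maps its inputs through a layer of Gaussian kernels to a linear output, whose weights can be obtained in closed form. The snippet below is a minimal illustrative sketch of this idea, not the thesis implementation; the toy target function, the choice of the training points as centres and the kernel width 0.15 are all arbitrary assumptions made for the example.

```python
import numpy as np

def gaussian_design(x, centers, width):
    """Matrix of Gaussian RBF activations, one column per centre."""
    diff = x[:, None] - centers[None, :]
    return np.exp(-(diff ** 2) / (2.0 * width ** 2))

# Toy training data: approximate y = x^2 on [0, 1].
x = np.linspace(0.0, 1.0, 9)
y = x ** 2

# Using the training points themselves as centres gives the exact
# interpolation case; width 0.15 is an arbitrary illustrative choice.
centers, width = x, 0.15
phi = gaussian_design(x, centers, width)

# Output weights solved directly by linear least squares (no iterative
# training), which is one reason RBF networks converge quickly.
weights, *_ = np.linalg.lstsq(phi, y, rcond=None)

approx = phi @ weights
print(np.max(np.abs(approx - y)))  # training error is tiny for this square system
```

Because the hidden layer is fixed and only the output weights are fitted, training reduces to one linear solve, which is consistent with the short convergence time noted for the NNRBF method.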


CHAPTER 8

CONCLUSION AND FUTURE WORK

8.1 CONCLUSION

In brief, the following conclusions are made:

• Image compression process is carried out using Fractal, Discrete Cosine

Transform, Discrete Wavelet Transform, Set Partitioning in Hierarchical Trees,

Neural Network Back Propagation, Neural Network Radial Basis Function,

Hybrid Discrete Wavelet Transform with Neural Network Back Propagation

and Hybrid Fractal with Neural Network Radial Basis Function methods.

• The performance of the conventional methods is found to be unsatisfactory for MRI, CT and PET image compression.

• The performance of Neural Network Back Propagation and Neural Network

Radial Basis Function for compression process is improved by means of

introducing hybrid technology.

• Neural Network Radial Basis Function algorithm is effective in obtaining

better compression results of various CT, MRI and PET images.

• Hybrid Fractal with Neural Network Radial Basis Function based image

compression is found to be better when compared to conventional compression

methods.


Table 8.1 Compression ratio of 24 medical images obtained using the DCT, DWT, Fractal, NNBP, NNRBF, Hybrid Fractal & NNRBF and Hybrid DWT-NNBP algorithms

Images         DCT      DWT      Fractal   NNBP     NNRBF    Hybrid Fractal & NNRBF   Hybrid DWT-NNBP
CT Image 1     1.4088   1.0002   6.6507    1.0495   1.0537   1.0424                   1.0005
CT Image 2     1.3442   1.0011   6.6622    1.0356   0.8386   0.6958                   1.0121
CT Image 3     1.4679   1.0015   13.2030   1.0330   1.0632   1.0482                   1.0016
CT Image 4     1.3443   1.0020   5.0707    1.0328   1.1561   1.0581                   0.568
MR Image 5     1.4402   1.0005   6.5164    1.0564   0.9678   0.9345                   0.9254
MR Image 6     1.6127   1.0048   10.9643   1.0423   1.1854   1.1463                   0.8844
MR Image 7     1.3586   1.0229   4.4960    1.0398   1.2092   0.3788                   0.0969
MR Image 8     1.5927   1.0265   7.5873    1.0442   1.117    1.1054                   0.5601
MR Image 9     1.5900   1.0144   5.1806    1.0208   1.1416   0.9305                   0.0999
MR Image 10    1.2792   1.0001   6.9771    1.0591   1.0116   1.0085                   1.005
MR Image 11    1.3006   0.9999   6.5397    1.0605   0.9904   0.9703                   0.5603
MR Image 12    1.4355   1.0073   3.4217    1.0536   0.956    0.9271                   0.8914
MR Image 13    1.4994   1.0013   1.8435    1.0837   1.0728   0.8469                   0.9743
MR Image 14    1.3451   1.0006   2.3289    1.0728   1.1096   0.7947                   0.0518
MR Image 15    1.6541   1.0001   3.0478    1.0514   1.0365   0.9788                   1.0052
MR Image 16    1.5732   1.0023   3.4444    1.0513   1.0499   0.8910                   1.0115
MR Image 17    1.4197   1.0001   4.8070    1.0426   0.9978   0.9794                   0.9986
MR Image 18    1.6541   1.0001   3.0478    1.0514   1.0365   0.9788                   1.0052
MR Image 19    1.7575   1.0152   3.4270    0.9938   1.1379   0.8874                   0.0471
MR Image 20    1.0884   1.0015   4.1237    1.0398   1.003    0.9852                   1.002
MR Image 21    1.2792   1.0001   6.2566    1.0591   1.0636   0.6189                   1.005
PET Image 22   1.5891   1.0069   5.5154    1.0530   1.0352   1.0080                   0.4557
PET Image 23   1.4908   1.0006   1.4721    0.7662   0.7923   0.3612                   0.2325
PET Image 24   1.5791   1.0048   1.1946    1.0709   1.0696   1.0687                   1.0007


Table 8.2 Peak Signal to Noise Ratio of 24 medical images obtained using the DCT, DWT, Fractal, NNBP, NNRBF, Hybrid Fractal & NNRBF and Hybrid DWT-NNBP algorithms

Images         DCT        DWT      Fractal   NNBP     NNRBF    Hybrid Fractal & NNRBF   Hybrid DWT-NNBP
CT Image 1     88.8087    71.7728  38.2519   69.6090  39.1906  34.1945                  61.3201
CT Image 2     88.3888    71.3010  38.7416   42.3719  29.9663  29.4611                  39.1269
CT Image 3     85.8091    55.7033  35.8632   62.5396  30.6855  31.0718                  47.2912
CT Image 4     92.4192    67.3440  38.9335   40.2157  19.5331  20.6044                  29.4004
MR Image 5     90.5865    67.7753  43.6002   62.0237  32.7573  33.1091                  38.5935
MR Image 6     91.8776    63.8786  41.7960   46.7436  22.9515  23.1293                  31.9976
MR Image 7     92.2370    62.7552  39.0490   44.7684  22.3658  16.4623                  29.4417
MR Image 8     97.7195    60.1968  42.3063   48.5713  29.9071  30.0208                  23.6815
MR Image 9     101.4771   64.2066  43.6769   45.7868  24.6009  24.6205                  25.814
MR Image 10    90.5101    67.8580  43.3527   73.0390  44.3570  46.0139                  47.1406
MR Image 11    90.6810    68.4217  43.0880   68.0856  44.2258  44.5761                  43.8376
MR Image 12    99.6740    67.9670  40.7357   54.1458  29.3755  31.4664                  38.7958
MR Image 13    114.2706   72.8183  43.4892   66.8312  50.9330  44.3137                  38.0442
MR Image 14    97.6726    72.5798  41.3458   47.6965  35.7803  27.3508                  24.6016
MR Image 15    98.4225    64.5986  39.6035   72.3725  46.2050  42.7904                  43.2033
MR Image 16    95.1357    65.2455  39.9027   57.2598  32.9302  28.8155                  40.3127
MR Image 17    96.8298    61.9274  39.0449   84.0402  37.5945  37.4232                  64.8598
MR Image 18    98.4225    64.5986  3.0478    72.3725  1.0365   0.9788                   43.2033
MR Image 19    90.0821    54.2999  3.4270    36.2982  1.1379   0.8874                   23.7107
MR Image 20    91.1779    60.5071  34.1952   69.1297  38.2271  38.9271                  49.3025
MR Image 21    90.5101    67.8580  43.3019   73.0390  32.7369  26.1410                  47.1406
PET Image 22   97.2405    66.0979  43.3578   50.2760  33.4930  32.3096                  27.168
PET Image 23   112.8808   78.9087  43.9157   42.7221  24.4163  21.6748                  28.3994
PET Image 24   112.1955   70.6387  39.3448   84.0081  52.1345  52.1528                  48.427


Table 8.3 Memory usage of 24 medical images obtained using the DCT, DWT, Fractal, NNBP, NNRBF, Hybrid Fractal & NNRBF and Hybrid DWT-NNBP algorithms

Images         DCT     DWT     Fractal   NNBP    NNRBF   Hybrid Fractal & NNRBF   Hybrid DWT-NNBP
CT Image 1     21.40   28.70   26.80     28.70   28.60   25.70                    24.1
CT Image 2     23.20   29.80   28.40     30.10   37.20   28.40                    17.2
CT Image 3     31.60   44.80   43.20     44.90   43.60   41.20                    30.3
CT Image 4     21.10   26.90   25.00     27.40   24.50   23.60                    26.9
MR Image 5     18.00   24.50   21.90     24.50   26.70   23.40                    51.4
MR Image 6     18.90   28.90   26.40     29.20   25.70   23.10                    70.5
MR Image 7     22.60   28.60   23.00     29.50   25.40   23.00                    49.6
MR Image 8     19.20   28.40   25.10     29.30   27.40   22.70                    44.9
MR Image 9     15.70   23.30   20.50     24.50   21.90   22.00                    55.5
MR Image 10    20.50   24.80   23.70     24.80   25.90   23.50                    11.6
MR Image 11    19.60   24.10   23.00     24.10   25.80   23.00                    10.3
MR Image 12    17.50   23.70   22.30     23.90   26.30   22.30                    22.1
MR Image 13    12.40   17.10   17.80     17.10   17.30   21.00                    17.8
MR Image 14    14.40   18.00   15.90     18.10   71.50   15.90                    3.32
MR Image 15    17.20   27.00   20.50     27.00   27.40   20.50                    27
MR Image 16    18.20   27.20   22.70     27.30   27.30   22.70                    45.2
MR Image 17    23.40   31.90   29.90     31.90   33.30   29.90                    5.78
MR Image 18    17.20   27.00   20.50     27.00   27.40   20.50                    27
MR Image 19    26.80   45.00   28.20     47.10   41.40   28.20                    70.5
MR Image 20    27.40   39.10   32.70     39.20   40.60   33.20                    19.2
MR Image 21    20.50   24.80   20.60     24.80   22.90   20.60                    11.6
PET Image 22   14.80   21.90   19.20     22.40   22.80   19.10                    23.3
PET Image 23   9.97    13.40   14.30     19.40   18.70   39.60                    15
PET Image 24   13.20   19.40   18.80     19.50   19.60   17.50                    9.54


Table 8.4 Execution time of 24 medical images obtained using the DCT, DWT, Fractal, NNBP, NNRBF, Hybrid Fractal & NNRBF and Hybrid DWT-NNBP algorithms

Images         DCT      DWT      Fractal   NNBP        NNRBF    Hybrid Fractal & NNRBF   Hybrid DWT-NNBP
CT Image 1     0.7572   1.0867   11.7500   850.4214    17.4231  19.6851                  848.0286
CT Image 2     0.7645   1.0818   11.7969   278.7227    16.4425  18.5039                  279.6484
CT Image 3     0.7522   2.3126   16.3281   542.2186    18.6835  23.0971                  546.848
CT Image 4     0.8272   1.6483   10.8594   935.6339    18.5866  21.1245                  938.7183
MR Image 5     0.9514   1.1104   12.0000   387.0713    18.5779  23.6943                  354.3677
MR Image 6     0.8773   1.1089   14.8125   667.3835    18.2230  21.6958                  668.1684
MR Image 7     0.7620   1.2349   10.5625   658.0410    18.2627  18.4104                  656.5766
MR Image 8     1.5385   1.0894   12.3906   1164.2615   18.7018  20.8762                  874.3936
MR Image 9     0.7825   1.0646   10.9688   836.9937    18.3615  18.9826                  798.0145
MR Image 10    1.1055   1.1229   12.1094   751.2015    18.6340  20.9603                  786.3682
MR Image 11    0.8540   1.1197   11.9219   294.9351    18.7667  21.6570                  287.1838
MR Image 12    0.7832   1.1865   9.8438    415.8369    17.8159  19.7808                  425.006
MR Image 13    0.7919   1.0898   9.0625    497.0650    17.9883  20.0986                  504.9017
MR Image 14    1.4770   1.0197   9.0938    313.2422    17.2854  18.6382                  321.9819
MR Image 15    0.7612   1.0705   9.4844    285.6100    17.7939  18.6295                  293.2718
MR Image 16    0.8803   1.0782   10.0625   610.1797    18.7660  20.1662                  633.1677
MR Image 17    1.0694   1.1614   10.7031   843.5427    26.7360  20.4192                  858.127
MR Image 18    0.9104   1.0800   9.5000    285.4747    27.1599  18.9999                  293.2718
MR Image 19    1.0884   1.0605   10.1094   654.8289    17.9907  19.0742                  663.0196
MR Image 20    1.2724   1.2132   10.5156   504.8591    18.1835  20.2288                  551.1406
MR Image 21    0.7581   1.0645   11.5313   744.2455    18.5056  18.7640                  758.223
PET Image 22   0.8768   1.1157   11.2656   652.3676    18.8691  18.8583                  653.9199
PET Image 23   0.7791   1.0681   8.5625    452.7775    16.5292  19.5478                  463.315
PET Image 24   1.0188   1.1340   8.8125    378.8078    16.7909  20.1511                  388.1468

Medical image compression is provided by the Fractal, DCT, DWT, SPIHT, NNBP, NNRBF, Hybrid DWT-NNBP and Hybrid Fractal-NNRBF algorithms. Among these algorithms, the comparison of results shows that Hybrid Fractal-NNRBF produces compressed images with better CR, PSNR and bandwidth required to save the image. Soft computing techniques for image


compression have produced better image quality when compared to analytical and statistical algorithms. This proves that the Hybrid Fractal-NNRBF algorithm is commendable, and this research concludes that high-quality compression of CT, MRI and PET images is offered through the Hybrid Fractal with Neural Network Radial Basis Function method.

8.2 FUTURE WORK

• The Hybrid-Neural Network based method can be combined with other

evolutionary methods and Level set methods to give better compression

results.

• This application can be extended to the compression of other images and to the analysis of real-time diagnosis.

• The methodologies used in this research can also be extended for the

compression of images pertaining to Oncology.


REFERENCES

[1] Abdul Khader Jilani Saudagar and Omar A. Shathry, Neural Network Based Image

Compression Approach to Improve the Quality of Biomedical Image for

Telemedicine, British Journal of Applied Science & Technology, 4(3), (2014), 510-

524.

[2] Abirami. J, Siva sankari. S and Narashiman. K, Image compression based on Wavelet Support Vector Machine Kernels, International Journal of Engineering and Technology (IJET), 5(2), (2013), 1584-1588.

[3] Adnan Khashman and Kamil Dilmililer, Medical Radiographs Compression using

Neural Networks and Haar Wavelet, IEEE Transaction, (2009), 1448-1453.

[4] Ajay Kumar Bhagat and Er. Dipti Bansal, Image Fusion Using Hybrid Method with Singular Value Decomposition and Wavelet Transform, International Journal of Emerging Technology and Advanced Engineering, 4, (2014), 827-830.

[5] Alagendran B, Manimurugan S, A Survey on Various Medical Image Compression

Techniques. International Journal of Soft Computing and Engineering (IJSCE), 2(1),

(2012), 425-428.

[6] Alex Alexandridis, Eva Chondrodima, and Haralambos Sarimveis, Radial Basis

Function Network Training Using a Nonsymmetric Partition of the Input Space and

Particle Swarm Optimization. IEEE Transactions on Neural Networks and Learning

Systems, 24(2), (2013), 219-230.

[7] Al-Fahoum A and Harb B, A Combined Fractal and Wavelet Angiography Image

Compression Approach, The Open Medical Imaging Journal, 7, (2013), 9-18.

[8] Ali Al-Fayadh, Abir Jaafar Hussain, Paulo Lisboa, Dhiya Al-Jumeily and M. Al-Jumaily, A Hybrid Image Compression Method and Its Application to Medical Images, Second International Conference on Developments in eSystems Engineering, (2009), 107-112.


[9] Alok Kumar Singh, G.S. Tripathi, A Comparative Study of DCT, DWT & Hybrid (DCT-DWT) Transform, GJESR Review Paper, 1(4), (2014), 2349-2355.

[10] Andrew Martchenko and Guang Deng, Bayesian Predictor Combination for Lossless

Image Compression, IEEE Transactions on Image Processing, 22(12), (2013), 5263-

5270.

[11] Anil Bhagat, Balasaheb Deokate, Improve Image Quality at Low Bitrate Using Wavelet Based Fractal Image Coder, International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 2(8), (2013), 3693-3702.

[12] Anjana Jianyu Lin and Mark J. T. Smith, Two-Band Hybrid FIR–IIR Filters for

Image Compression, IEEE Transactions on Image Processing, 20(11), (2011),

3063-3072.

[13] Ankita Vaish and Manoj Kumar, A new Image Compression Technique using Principal Component Analysis and Huffman Coding, International Conference on Parallel, Distributed and Grid Computing, (2014), 301-305.

ANN and DWT, International Journal of Computer Applications, 95(11), (2014),

35-38.

[14] Anna Durai, and E. Anna Saro, Image Compression with Back-Propagation Neural

Network using Cumulative Distribution Function, World Academy of Science,

Engineering and Technology, 2, (2008), 802-806.

[15] Arif Sameh Arif, Sarina Mansor, Hezrul Abdul Karim and Rajasvaran Logeswaran, Lossless Compression of Fluoroscopy Medical Images using Correlation and the Combination of Run-length and Huffman Coding, IEEE EMBS International Conference on Biomedical Engineering and Sciences, (2012), 759-762.

[16] Arun Vikas Singh and Srikanta Murthy K, Neuro-Wavelet based Efficient Image

Compression using Vector Quantization, International Journal of Computer

Applications, 49(3), (2012), 33-40.

[17] Arunpriya C. and Antony Selvadoss Thanamani, An Effective Tea Leaf Recognition

Algorithm for Plant Classification Using Radial Basis Function Machine,


International Journal of Modern Engineering Research (IJMER), 4(3), (2014),35-

44.

[18] Bairagi and A.M. Sapkal, Automated region-based hybrid compression for digital imaging and communications in medicine magnetic resonance imaging images for telemedicine applications, IET Sci. Meas. Technol., 6(4), (2012), 247-253.

[19] Bhammar and Prof. K.A. Mehta, Survey of Various Image Compression Techniques,

International Journal of Darshan Institute on Engineering Research & Emerging

Technologies, 1(1), (2012), 85-90.

[20] Birendra Kumar Patel, Prof. Suyash Agrawal, Image Compression Techniques Using

Artificial Neural Network, International Journal of Advanced Research in Computer

Engineering & Technology (IJARCET), 2(10), (2013), 2725-2729.

[21] Chakrapani and K. Soundera Rajan, Implementation of Fractal Image Compression

Employing Hybrid Genetic-Neural Approach. International Journal Of

Computational Cognition, 7(3), (2009), 33-39.

[22] Chander mukhi, Pallavi Nayyar, Mandeep Singh Saini, Improved Image

Compression using Hybrid Transform. International Journal of Science and

Research (IJSR), 2(1), (2013), 316-319.

[23] Chaudhari and S. B. Dhok, Wavelet Transformed based Fast Fractal Image

Compression, International Conference on Circuits, Systems, Communication and

Information Technology Applications (CSCITA), (2014), 65-69.

[24] Chiang and L.M. Po, Adaptive lossy LZW algorithm for palettised image compression, Electronics Letters, 33(10), (1997).

[25] Christophe Amerijckx, Michel Verleysen, Philippe Thissen, and Jean-Didier Legat,

Image Compression by Self-Organized Kohonen Map. IEEE Transactions on Neural

Networks, 9(3), (1998), 503-507.

[26] Chunlei Jiang and Shuxin Yin, A Hybrid Image Compression Algorithm Based on

Human Visual System, International Conference on Computer Application and

System Modeling (ICCASM 2010), 9, (2010), 170-173.

[27] Ci Wang, Hong-Bin Yu, and Meng Zheng, A DCT-based MPEG-2 Transparent

Scrambling Algorithm, IEEE Transactions on Consumer Electronics, 49(4), (2003),

1208-1213.


[28] Debin Zhao, Wen Gao, and Y. K. Chan, Morphological Representation of DCT

Coefficients for Image Compression. IEEE Transactions On Circuits And Systems

For Video Technology, 12(9), (2002), 819-823.


[29] Dipta Pratim Dutta, Samrat Deb Choudhury, Md. Anwar Hussain, and Swanirbhar

Majumder, Digital Image Compression using Neural Networks, International

Conference on Advances in Computing, Control, and Telecommunication

Technologies, (2009), 116-120.

[30] Ferni Ukrit and G.R.Suresh, Hybrid Algorithm for Medical Image Sequences using

Super-Spatial Structure Prediction with LZ8. International Journal of Computer

Applications, 86(11), (2014), 10-15.

[31] Harjeetpal singh and Sakhi Sharma, Hybrid Image Compression Using DWT, DCT

& Huffman Encoding Techniques, International Journal of Emerging Technology

and Advanced Engineering, 2(10), (2012), 300-306.

[32] Jaffar Iqbal Barbhuiya, Tahera Akhtar Laskar and K. Hemachandran, An Approach

for Color Image Compression of JPEG and PNG Images using DCT And DWT.

Sixth International Conference on Computational Intelligence and Communication

Networks, (2014), 129-133.

[33] Jagadish H. Pujar and Lohit M. Kadlaskar, A New Lossless Method Of Image

Compression And Decompression Using Huffman Coding Techniques. Journal of

Theoretical and Applied Information Technology, (2010), 18-23.

[34] Jiaji Wu, Chong Liang, Jianxiang Han, Zejun Hu, Dehong Huang, Hongqiao Hu,

Yong Fang, Licheng Jiao, Two-stage lossless compression algorithm for aurora

image using weighted motion compensation and context-based model, Elsevier

Journal, 290, (2013), 19-27.


[35] Jianji Wang, and Nanning Zheng, A Novel Fractal Image Compression Scheme with

Block Classification and Sorting Based on Pearson’s Correlation Coefficient. IEEE

Transactions on Image Processing, 22(9), (2013), 3690-3702.

[36] Jian-Jiun Ding, Hsin-Hui Chen, and Wei-Yi Wei, Adaptive Golomb Code for Joint

Geometrically Distributed Data and Its Application, Image Coding. IEEE

Transactions On Circuits And Systems For Video Technology, 23(4), (2013), 661-

670.

[37] Jie Liang, Trac D. Tran and Ricardo L. de Queiroz, DCT-Based General Structure

for Linear-Phase Paraunitary Filterbanks. IEEE Transactions On Signal Processing,

51(6), (2003), 1572-1580.

[38] Jonathan Taquet and Claude Labit, Hierarchical Oriented Predictions for Resolution

Scalable Lossless and Near-Lossless Compression of CT and MRI Biomedical

Images, IEEE Transactions on Image Processing, 21(5), (2012), 2641-2652.

[39] Jyh-Horng Jeng, Chun-Chieh Tseng, and Jer-Guang Hsieh, Study on Huber Fractal

Image Compression, IEEE Transactions on Image Processing, 18(5), (2009), 995-

1003.

[40] Kai-jen Cheng, and Jeffrey Dill, Lossless to Lossy Dual-Tree BEZW Compression

for Hyperspectral Images, IEEE Transactions on Geoscience and Remote Sensing,

52(9), (2014), 5765-5770.

[41] Kaur, R.C. Chauhan and S.C. Saxena, Adaptive compression of medical ultrasound

images, IEE Proc.-Vis. Image Signal Process., 153(2), (2006), 185-190.

[42] Kesavamurthy Thangavelu, Thiyagarajan Krishnan, Lossless Color Medical Image

Compression using Adaptive Block-Based Encoding for Human Computed

Tomographic Images, Wiley Periodicals, Inc, 23, (2013), 227-234.

[43] Kuppusamy, R.Ilackiya, Fractal Image Compression & Algorithmic Techniques,

International Journal of Computer & Organization Trends, 3(4), (2013), 141-145.

[44] Leyuan Fang, Shutao Li, Xudong Kang, Joseph A. Izatt, and Sina Farsiu, 3-D

Adaptive Sparsity Based Image Compression With Applications to Optical


Coherence Tomography. IEEE Transactions on Medical Imaging, 34(6), (2015),

1306-1320.

[45] Long Zhang, KangLi, HaiboHe and George W. Irwin, A New Discrete-Continuous

Algorithm for Radial Basis Function Networks Construction. IEEE Transactions on

Neural Networks and Learning Systems, 24(11), (2013), 1785-1798.

[46] Merav Huber-Lerner, Ofer Hadar, Stanley R. Rotman, and Revital Huber-Shalem,

Compression of Hyperspectral Images Containing a Subpixel Target. IEEE Journal

Of Selected Topics In Applied Earth Observations And Remote Sensing, 7(6), (2014),

2246-2255.

[47] Mohamed Abo-Zahhad, Reda Ragab Gharieb, Sabah M. Ahmed, Mahmoud Khaled

Abd-Ellah, Huffman Image Compression Incorporating DPCM and DWT, Journal of

Signal and Information Processing, 6, (2015), 123-135.

[48] Mohamed El Zorkany, A Hybrid Image Compression Technique Using Neural

Network and Vector Quantization With DCT, Springer International Publishing

Switzerland, 5, (2014), 233-244.

[49] Monika Narwal and Er.Tarun Jeet Singh Chugh, Image Compression By Hybrid

Transformation Technique, International Journal of Advanced Research in

Computer Science and Software Engineering, 3(7), (2013), 474-479.

[50] Ng and L.M. Cheng, Data re-ordering technique for lossless LZW continuous-tone image compression, Electronics Letters, 35(20), (1999).

[51] Nikita Bansal and Sanjay Kumar Dubey, Image Compression Using Hybrid

Transform Technique, Journal of Global Research in Computer Science, 4(1),

(2013) 13-17.

[52] Omar Arif and Patricio Antonio Vela, Kernel Map Compression using Generalized

Radial Basis Functions, IEEE 12th International Conference on Computer Vision

(ICCV), (2009), 1119-1124.

[53] Panda, M.S.R.S Prasad, MNM Prasad, Ch. SKVR Naidu, Image Compression Using

Back Propagation Neural Network, International Journal Of Engineering Science &

Advanced Technology, 2(1), (2012), 74-78.


[54] Patil and A.R.Yardi, ANN based Dementia Diagnosis using DCT for Brain MR

Image compression. International conference on Communication and Signal

Processing, (2013), 451-454.

[55] Pawel Turcza and Mariusz Duplaga, Hardware-Efficient Low-Power Image

Processing System for Wireless Capsule Endoscopy, IEEE Journal Of Biomedical

And Health Informatics, 17(6), (2013),1046-1056.

[56] Praisline Jasmi, B. Perumal and M. Pallikonda Rajasekaran, Comparison of Image Compression Techniques using Huffman Coding, DWT and Fractal Algorithm, International Conference on Computer Communication and Informatics (ICCCI), (2015).

[57] Prema Karthikeyan, Narayanan Sreekumar, A Study on Image Compression with

Neural Networks Using Modified Levenberg Marquardt Method, Global Journal of

Computer Science and Technology, 11(3), (2011).

[58] Priya Pareek, Manish Shrivastava, An Image Compression Using Multilayer Wavelet

Transform with 2DTCWT: A Review, International Journal of Computer

Applications, 102(1), (2014), 13-17.

[59] Renato J. Cintra, and Fábio M. Bayer, A DCT Approximation for Image

Compression. IEEE Signal Processing Letters, 18(10), (2011), 579-582.

[60] Reny Catherin L, Thirupurasunthari P, Sherley Arcksily Sylvia A, Sravani Kumari

G, Joany R.M and N.M. Nandhitha, A Survey on Hybrid Image Compression

Techniques for Video Transmission. International Journal of Electronics and

Communication Engineering, 6(3), (2013), 217-224.

[61] Robina Ashraf and Muhammad Akbar, Absolutely Lossless Compression of Medical

Images, Engineering in Medicine and Biology 27th Annual Conference, (2005),

4006-4009.

[62] Sridhar M.I.S.T.E, V. Venugopal, S. Ramesh, Srinivas and Sk. Mansoob, Wavelets

and Neural Networks based Hybrid Image Compression Scheme, International

Journal of Emerging Trends & Technology in Computer Science (IJETTCS), 2,

(2013), 195-200.


[63] Saravanan and R. Ponalagusamy, Lossless Grey-scale Image Compression using

Source Symbols Reduction and Huffman Coding, International Journal of Image

Processing (IJIP), 3, (2010), 246-251.

[64] Seyun Kim and Nam Ik Cho, Hierarchical Prediction and Context Adaptive Coding

for Lossless Color Image Compression, IEEE Transactions On Image Processing,

23(1), (2014), 445-449.

[65] Shaou-Gang Miaou, Fu-Sheng Ke, and Shu-Ching Chen, A Lossless Compression

Method for Medical Image Sequences Using JPEG-LS and Interframe Coding, IEEE

Transactions on Information Technology in Biomedicine, 13(5), (2009), 818-821.

[66] Shiqiang Yan and Zhong Xiao, Application of BP Neural Network with Chebyshev

Mapping in Image Compression, International Conference on Instrumentation,

Measurement, Computer, Communication and Control, (2013), 398-402.

[67] Shruti Puniani, Er. Nishi Madaan, Various Image Compression Techniques: A

Review, International Journal of Advanced Research in Computer Engineering &

Technology (IJARCET), 3(4), (2014), 1164-1170.

[68] Sophin Seeli, Dr. M.K. Jeyakumar, A Study on Fractal Image Compression using

Soft Computing Techniques, IJCSI International Journal of Computer Science

Issues, 9(2), (2012), 420-430.

[69] Sridevi, Dr. V.R. Vijayakumar and Ms. R. Anuja, A Survey on Various Compression

Methods for Medical Images. I.J. Intelligent Systems and Applications, 3, (2012),

13-19.

[70] Sridharan Bhavani, Kepanna Gowder Thanushkodi, Comparison of fractal coding

methods for medical image compression, IET Image Process, 7(7), (2013), 686-693.

[71] Taha Mohammed Hasan and Xingqian Wu, An Adaptive Fractal Image

Compression, IJCSI International Journal of Computer Science Issues, 10(1),

(2013), 98-108.

[72] Tajallipour and Khan Wahid, Efficient Implementation of Adaptive LZW Algorithm

for Medical Image Compression. International Conference on Computer and

Information Technology, (2011), 987.


[73] Tamilarasi and Dr.V.Palanisamy, Contourlet Based Medical Image Compression

Using Improved EZW, International Conference on Advances in Recent

Technologies in Communication and Computing, (2009), 800-804.

[74] Tiruvenkadam Santhanam and A.C. Subhajini, An Efficient Weather Forecasting

System using Radial Basis Function Neural Network, Journal of Computer Science,

7(7), (2011), 962-966.

[75] Václav Šimek and Ram Rakesh ASN, GPU Acceleration of 2D-DWT Image

Compression in MATLAB with CUDA. Second UKSIM European Symposium on

Computer Modeling and Simulation, (2008), 274-277.

[76] Vasanthi Kumari P and K Thanushkodi, Image Compression Using Wavelet

Transform and Graph Cut Algorithm. Journal of Theoretical and Applied

Information Technology, 53(3), (2013), 437-445.

[77] Vidhya and S. Shenbagadevi, Medical Image Compression Using Hybrid Coder

With Fuzzy Edge Detection, ICTACT Journal on Image and Video Processing, 1(3),

(2011), 138-142.

[78] Vilas Gaidhane, Vijander Singh and Mahendra Kumar, Image Compression using

PCA and Improved Technique with MLP Neural Network, International Conference

on Advances in Recent Technologies in Communication and Computing, (2010),

106-110.

[79] Vilas H. Gaidhane, Vijander Singh, Yogesh V. Hote and Mahendra Kumar, New

Approaches for Image Compression Using Neural Network, Journal of Intelligent

Learning Systems and Applications, 3, (2011), 220-229.

[80] Xiaofeng Li and Yi Shen, A Medical Image Compression Scheme Based on Low

Order Linear Predictor and Most-likely Magnitude Huffman Code. International

Conference on Mechatronics and Automation, (2006), 1796-1800.

[81] Yeo, David F. W. Yap, T.H. Oh, D.P. Andito, S. L. Kok, Y. H. Ho, M. K. Suaidi,

Grayscale Medical Image Compression using Feedforward Neural Networks,

International Conference on Computer Applications and Industrial Electronics,

(2011), 633-638.


[82] Yongfei Zhang, Haiheng Cao, Hongxu Jiang and Bo Li, Visual Distortion Sensitivity

Modeling for Spatially Adaptive Quantization in Remote Sensing Image

Compression, IEEE Geoscience And Remote Sensing Letters, 11(4), (2014), 723-

727.

[83] Yongjian Nian, Mi He and Jianwei Wan, Distributed near lossless compression

algorithm for hyperspectral images, Elsevier Journal, 40, (2014), 1006-1014.

[84] Yung-Gi Wu and Shen-Chuan Tai, Medical Image Compression by Discrete Cosine

Transform Spectral Similarity Strategy, IEEE Transactions on Information

Technology in Biomedicine, 5(3), (2001), 236-243.

[85] Yung-Gi Wu, Medical Image Compression by Sampling DCT Coefficients, IEEE

Transactions on Information Technology in Biomedicine, 6(1), (2002), 86-94.


LIST OF PUBLICATIONS

International Journal - Published:

1. Perumal, B. and M. Pallikonda Rajasekaran, ”Efficient Image Compression

Techniques for Compressing Multimodal Medical Images using Neural

Network Radial Basis Function Approach,” International Journal of

Imaging Systems and Technology, Volume 25, Issue 2, pages 115–122,

June 2015. Article first published online: 19 May 2015,

DOI: 10.1002/ima.22127 (Impact Factor: 1.301)

2. Praisline Jasmi R, Perumal B, Pallikonda Rajasekaran M “Comparison of

Medical Image Compression using DWT Algorithm and Neural Network

Techniques,” Advances in Natural and Applied Sciences (AENSI Journal),

Vol. 8, No. 19, pp. 1-9, November 2014.

3. Perumal, B. and M. Pallikonda Rajasekaran, “Compression Techniques for

Medical Images Using SPIHT,” Applied Mechanics and Materials

(Volume 626), pp. 87-94, 2014, ISSN: 1662-7482 (Scopus Indexed)

International Journal - Communicated:

1. Perumal, B. and M. Pallikonda Rajasekaran, "A Hybrid Approach Using

Fractal and Neural Network Radial Basis Function for Efficient

Compression of Multi Modal Medical Images," communicated to the

International Journal of Imaging Systems and Technology


International Conference - Published:

1. Mr. B. Perumal and Dr. M. Pallikonda Rajasekaran, “A Hybrid Discrete

Wavelet Transform with Neural Network Back Propagation Approach for

Efficient Medical Image Compression,” International Conference on

Emerging Trends in Engineering, Technology and Science (ICETETS-

2016), Kings College of Engineering, 24th-26th February 2016.

Indexed in IEEE Xplore

2. Perumal B, Praisline Jasmi R and Pallikonda Rajasekaran M, “Comparison of

Image Compression Techniques using Huffman Coding, DWT and Fractal

Algorithm,” 2015 International Conference on Computer Communication

and Informatics (ICCCI), conducted at Sri Shakthi College of

Engineering, 08th – 10th January 2015. Indexed in IEEE Xplore

3. Chithra, K., B. Perumal, M. Pallikonda Rajasekaran, and T. Arun Prasath.

"A quantitative assessment of image compression parameters and its

algorithm." In Communication Technologies (GCCT), 2015 Global

Conference on, pp. 294-296. IEEE, 2015. INDEXED IEEE EXPLORE

4. Praisline Jasmi R, Perumal B, Pallikonda Rajasekaran M “Comparison of

Medical Image Compression using DWT Algorithm and Neural Network

Techniques,” International Conference on Electrical, Electronics,

Instrumentation and Computer Communication (E2IC2) 2014, Karpagam

College of Engineering, 12th – 13th December 2014.

5. Perumal B, Pallikonda Rajasekaran M, and Duraiyarasan S, “Efficient

Image Compression Techniques for PET and MR Brain Images,” IEEE

Sponsored Fourth International Conference on Recent Trends in

Information Technology, MIT, Chennai, April 10-12, 2014.


6. Perumal B, Pallikonda Rajasekaran M, “Compression Techniques for

Medical Images Using SPIHT,” International Conference on Energy

Efficient Technologies for Sustainability (ICEETS’14), St. Xaviers

Catholic College of Engineering, Nagercoil, April 7-9, 2014.


CURRICULUM VITAE

Mr. B. Perumal was born at Bodinayakanur, India, in 1980. He graduated in

Electronics and Communication Engineering from Madurai Kamaraj University

and completed his postgraduate degree in Digital Communication and Network

Engineering in 2006 from Anna University, Chennai, India. He is pursuing a

Ph.D. (Medical Image Compression) at Kalasalingam University, Krishnankoil.

He is currently working as an Assistant Professor in the Department of

Electronics and Communication Engineering, Kalasalingam University. He has

published 3 papers in international journals, 6 papers in international

conferences and 15 papers in national conferences. His research interests

include Mobile Computing, Wireless Sensor Networks, Cloud Computing,

Bio-medical Instrumentation and Medical Image Compression.