DATA COMPRESSION
Prepared by – JAYPAL SINGH CHOUDHARY, SOURABH JAIN
Graphics from: http://plus.maths.org/issue23/features/data/data.jpg
Why Data Compression?
Definition: reducing the amount of data required to represent a source of information, while keeping the output as close to the original input as possible.
Objectives: reduce the space required for data storage, and also reduce the time of data transmission over a network.
Source: www.data-compression.com/index.shtml
Types of Compression: lossless compression and lossy compression. Basic principle of both:
Graphics from: http://img.zdnet.com/techDirectory/LOSSY.GIF
Lossless Compression: the compressing and decompressing algorithms are exact inverses of each other, so no information is lost.
TECHNIQUES
Run-Length Encoding: when the data contains runs of repeated symbols, each run can be replaced by a special marker plus a count.
Original data: 572744444444321333333333335278222222
Compressed data: 57274083213115278206
Source: www.data-compression.com/lossless.shtml
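The idea behind run-length encoding can be sketched in a few lines of Python. This is a minimal illustration using explicit (symbol, count) pairs, not the exact marker format shown on the slide:

```python
def rle_encode(data):
    """Replace each run of identical symbols with a (symbol, count) pair."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                      # extend the current run
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs):
    """Inverse of rle_encode: expand each pair back into a run."""
    return "".join(symbol * count for symbol, count in runs)

original = "572744444444321333333333335278222222"
encoded = rle_encode(original)
# Lossless: decoding exactly inverts encoding.
assert rle_decode(encoded) == original
```

The scheme only pays off when runs are long; on data with few repeats the (symbol, count) pairs can make the output larger than the input.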
Lossless (contd.)
Statistical compression: short codes are used for frequent symbols and long codes for infrequent ones. Three common schemes are: 1. Morse code, 2. Huffman encoding, 3. Lempel-Ziv-Welch encoding.
Relative compression: only the differences between successive data items are encoded. Extremely useful for sending video, since commercial TV transmits 30 frames every second.
Reference: www.data-compression.com/lossless.shtml
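To illustrate the statistical idea, here is a compact Huffman coder in Python (an assumed implementation, not taken from the source material): symbols are repeatedly merged from least frequent upward, so frequent symbols end up with short codes and infrequent ones with long codes.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: short codes for frequent symbols."""
    # Heap entries: (frequency, unique tie-breaker, {symbol: code-so-far}).
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # Prefix one subtree's codes with 0, the other's with 1, then merge.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# 'a' is most frequent, so its code is never longer than 'b's or 'c's.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```

The resulting code is prefix-free, so the concatenated bit stream can be decoded unambiguously without separators.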
Lossy compression: some data in the output is lost, but the loss is not detected by users. Mostly used for pictures, videos, and sounds. Basic techniques:
1. JPEG
2. MPEG
Reference: http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci214453,00.html
[Diagram: lossy codec pipeline – compress: Transformation → Quantisation → Encoding; decompress reverses the steps]
Latest Developments
Fathom 3.0: developed by Inlet Technologies in cooperation with Microsoft and Scientific Atlanta. Works with media files for mobile, portable web, and high-definition delivery.
History: a literature compendium for a large variety of audio coding systems was published in the IEEE Journal on Selected Areas in Communications (JSAC), February 1988. While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual (i.e. masking) techniques, some kind of frequency analysis, and back-end noiseless coding.
Image Compression Using Neural Networks
Overview:
- Introduction to neural networks: the back-propagation (BP) neural network
- Image compression using a BP neural network
- Comparison with existing image compression techniques
Image Compression using a BP Neural Network
- Future of image coding (analogous to our visual system)
- Narrow channel: K-L transform
- The entropy coding of the state vector h is at the hidden layer
Image Compression (continued…)
- A set of image samples is used to train the network.
- This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer.
- The image is subdivided into non-overlapping blocks of n x n pixels. Each block represents an N-dimensional vector x (N = n x n) in N-dimensional space. The transformation process maps each vector into y = Wx; the output is reconstructed as output = W⁻¹y.
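The block-transform step above can be sketched with NumPy. The transform matrix W here is a hypothetical random invertible matrix purely for illustration; in the actual scheme W would be learned by the network (or be the K-L transform):

```python
import numpy as np

n = 4                # block side; N = n * n = 16
N = n * n
rng = np.random.default_rng(0)

# Hypothetical invertible transform W (stand-in for the learned weights).
W = rng.standard_normal((N, N))
W_inv = np.linalg.inv(W)

image = rng.integers(0, 256, size=(8, 8)).astype(float)

# Subdivide into non-overlapping n x n blocks, flatten each to an N-vector x.
blocks = [image[r:r + n, c:c + n].reshape(N)
          for r in range(0, 8, n) for c in range(0, 8, n)]

for x in blocks:
    y = W @ x            # forward transform: y = Wx (encoding)
    x_hat = W_inv @ y    # inverse transform: output = W⁻¹y
    assert np.allclose(x, x_hat)   # exact reconstruction before quantization
```

Compression enters only when y is truncated or quantized; with the full, exact inverse the round trip is lossless, which is what the sketch verifies.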
Transform coding with a multilayer neural network
Image Compression (continued…)
The inverse transformation needs to reconstruct the original image with a minimum of distortion.
Proposed Method
- Wavelet packet decomposition
- Quantization
- Organization of vectors
- Neural network approximation
- Lossless encoding and reduction
Wavelet Packet Decomposition
The image is first put through a few levels of wavelet packet decomposition.
Quantization
- Each of the decomposed wavelet sections is divided by the quantization value and rounded to the nearest integer.
- This creates redundancy in the data, which makes it easier to compress.
- Quantization is not lossless.
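A minimal sketch of that quantization step (the step size q is an assumption; the source gives no value). Dividing and rounding collapses many nearby coefficients to the same integer, which creates the redundancy mentioned above, at the cost of a bounded reconstruction error:

```python
import numpy as np

def quantize(coeffs, q):
    """Divide by the step size and round to the nearest integer (lossy)."""
    return np.round(coeffs / q).astype(int)

def dequantize(levels, q):
    """Approximate inverse: scale the integer levels back up."""
    return levels * q

coeffs = np.array([0.3, 7.9, -12.4, 8.1, 0.2])
q = 4.0
levels = quantize(coeffs, q)        # -> [0, 2, -3, 2, 0]: repeated values
restored = dequantize(levels, q)    #    appear, i.e. redundancy
# Not lossless, but the per-coefficient error is at most q / 2.
assert np.max(np.abs(restored - coeffs)) <= q / 2
```

Larger q means more redundancy (better compression) but more distortion; the trade-off is chosen per wavelet section.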
Neural Network Approximation
[Figure: an example vector, with the trained neural network attempting to fit it]
Lossless Encoding and Reduction
- The entire data stream is then run-length encoded (RLE).
- Afterwards, the data is saved in the ZIP file format, which applies additional lossless encoding methods.
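This final stage can be imitated with Python's standard zlib module, which implements DEFLATE, the same compression family the ZIP format uses. The repetitive, quantized stream produced by the earlier stages compresses well:

```python
import zlib

# A quantized stream with long runs, as the earlier stages would produce.
stream = bytes([0] * 500 + [2, 2, 2, 255] * 10 + [0] * 500)

packed = zlib.compress(stream, level=9)

assert zlib.decompress(packed) == stream   # this stage is fully lossless
assert len(packed) < len(stream)           # the redundancy pays off
```

Unlike the quantization step, nothing is lost here: decompression recovers the stream byte for byte.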
Conclusion
- Neural networks can be used to compress images.
- However, they are probably not the best approach unless the data can be represented in some simpler form.
- Most of the compression came from the quantization, organization, and lossless compression stages.
References
1. http://en.wikipedia.org/wiki/Data_compression
2. http://en.wikipedia.org/wiki/Lossless_data_compression
3. http://en.wikibooks.org/wiki/Data_Coding_Theory/Data_Compression
4. http://en.wikibooks.org/wiki/Data_Compression
5. http://datacompression.dogma.net/index.php?title=Comp:compression_FAQ
Annotated Bibliography
I chose the text from – www.data-compression.com/index.shtml, www.data-compression.com/lossless.shtml, http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci214453,00.html, http://localtechwire.com/business/local_tech_wire/wirestory/1276887, http://www.futureofgadgets.com/futureblogger/show/1730 – because they fulfil my requirements for the topic.
I chose the graphics from – http://img.zdnet.com/techDirectory/LOSSY.GIF, http://plus.maths.org/issue23/features/data/data.jpg – because they clarify the points I want to explain.
Why Data Compression Definition Reducing the amount of data required to
represent a source of information Preserve the output data original to the input as much as possible Objectives Reduce the space required for the data storage Also reduce the time of data transmission over network
SOURCES - wwwdata-compressioncomindexshtml
Types of Compression Lossless compression Lossy compression Basic principle of both
Graphics from - httpimgzdnetcomtechDirectoryLOSSYGIF
Lossless CompressionIn this the compressing and
decompressing algorithms are inverse of each other
TECHNIQUES Run-Length Encoding When data contains repeated strings then these
can be replaced by special marker
original data compressed data
Sources- wwwdata-compressioncomlosslessshtml
572744444444321333333333335278222222 57274083213115278206
Lossless (contd) Statistical compression In this the short codes are used for
frequent symbols and long for infrequent Three common principles are - 1 Morse code 2 Huffman encoding 3 Lempel- Ziv -Welch encoding Relative compression Extremely useful for sending video
commercial TVs and30 frames in every second
References - wwwdata-compressioncomlosslessshtml
Lossy compression Some data in output is lost but not
detected by users Mostly used for pictures videos and
sounds Basic techniques are
1 JPEG 2 MPEG
Referenced -httpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html
Transformation
Quantisation
Encoding
decompress
compress
Latest Developments Fathom 30 Developed by Inlet technologies in
cooperation with Microsoft and Scientific Atlanta
Work with media files for mobiles portable web and high definition
Histor1048668A literature compendium for a large variety ofAudiocoding systems was published in the IEEEJournal on Selected Areas in Communications(JSAC) February 1988 While there weresome papers from before that time thisCollection documented an entire variety of finished working audio coders nearly all ofthem using perceptual (ie masking)Techniquce and some kind of frequencyanalysis and back End noiseless coding
Image Compression Using Neural Networks Overview - Introduction to neural networks Back Propagated (BP) neural network - Image compression using BP neural network- Comparison with existing image compression techniques
Image Compression using BP Neural Network
- Future of Image Coding(analogous to Our visual system) - Narrow Channel K-L transform - The entropy coding of the state vector h is
at the hidden Layer
Image Compression using continuedhellip
- A set of image samples is used to train the network
- This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer
- The image to be subdivided into non-overlapping blocks of n x n pixels each Such block represents N-dimensional vector x N = n x n in N-dimensional space Transformation process maps this set of vectors into y=W (input)
output=W-1y
Transform coding with multilayer Neural Network
Image Compression continuedhellip
The inverse transformation need toreconstruct original image withminimum ofdistortions
Proposed Method
- Wavelet packet decomposition - Quantization - Organization of vectors - Neural network approximation - Lossless encoding and reduction
Wavelet Packet Decomposition
The image is first put through a fewlevels ofwavelet packet decomposition
Quantization
- Each of the decomposed wavelet sections is divided by the quantization value and rounded to the nearest integer
- This creates redundancy in the data which is easier to work with
- Quantization is not lossless
Neural Network Approximation
-An example of the vector with the trainedNeural network attempting to fit it
Lossless Encoding and Reduction
- The entire data stream is then run-lengthencoded (RLE)
- Afterwards we can save the data using the ZIP file format which applies some other lossless encoding methods
Conclusion
- Neural networks can be used to compress images
- However they are probably not the best way to go unless the data can be
represented in some easier way- Most of the compression came from the
quantization organization and Lossless compression stages
References
1 httpenwikipediaorgwikiData_compression
2 httpenwikipediaorgwikiLossless_data_compression
3 httpenwikibooksorgwikiData_Coding_TheoryData_Compression
4 httpenwikibooksorgwikiData_Compression
5 httpdatacompressiondogmanetindexphptitle=Compcompression_FAQ
Annotated Bibliography I choose the text from ndash wwwdata-compressioncomindexshtml wwwdata-compressioncomlosslessshtmlhttpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html httplocaltechwirecombusinesslocal_tech_wirewirestory1276887 httpwwwfutureofgadgetscomfuturebloggershow1730
because it fulfills mine requirement for the topic I choose the graphics from ndash httpimgzdnetcomtechDirectoryLOSSYGIF
httpplusmathsorgissue23featuresdatadatajpg because it clears the situation which I want to explain
Types of Compression Lossless compression Lossy compression Basic principle of both
Graphics from - httpimgzdnetcomtechDirectoryLOSSYGIF
Lossless CompressionIn this the compressing and
decompressing algorithms are inverse of each other
TECHNIQUES Run-Length Encoding When data contains repeated strings then these
can be replaced by special marker
original data compressed data
Sources- wwwdata-compressioncomlosslessshtml
572744444444321333333333335278222222 57274083213115278206
Lossless (contd) Statistical compression In this the short codes are used for
frequent symbols and long for infrequent Three common principles are - 1 Morse code 2 Huffman encoding 3 Lempel- Ziv -Welch encoding Relative compression Extremely useful for sending video
commercial TVs and30 frames in every second
References - wwwdata-compressioncomlosslessshtml
Lossy compression Some data in output is lost but not
detected by users Mostly used for pictures videos and
sounds Basic techniques are
1 JPEG 2 MPEG
Referenced -httpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html
Transformation
Quantisation
Encoding
decompress
compress
Latest Developments Fathom 30 Developed by Inlet technologies in
cooperation with Microsoft and Scientific Atlanta
Work with media files for mobiles portable web and high definition
Histor1048668A literature compendium for a large variety ofAudiocoding systems was published in the IEEEJournal on Selected Areas in Communications(JSAC) February 1988 While there weresome papers from before that time thisCollection documented an entire variety of finished working audio coders nearly all ofthem using perceptual (ie masking)Techniquce and some kind of frequencyanalysis and back End noiseless coding
Image Compression Using Neural Networks Overview - Introduction to neural networks Back Propagated (BP) neural network - Image compression using BP neural network- Comparison with existing image compression techniques
Image Compression using BP Neural Network
- Future of Image Coding(analogous to Our visual system) - Narrow Channel K-L transform - The entropy coding of the state vector h is
at the hidden Layer
Image Compression using continuedhellip
- A set of image samples is used to train the network
- This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer
- The image to be subdivided into non-overlapping blocks of n x n pixels each Such block represents N-dimensional vector x N = n x n in N-dimensional space Transformation process maps this set of vectors into y=W (input)
output=W-1y
Transform coding with multilayer Neural Network
Image Compression continuedhellip
The inverse transformation need toreconstruct original image withminimum ofdistortions
Proposed Method
- Wavelet packet decomposition - Quantization - Organization of vectors - Neural network approximation - Lossless encoding and reduction
Wavelet Packet Decomposition
The image is first put through a fewlevels ofwavelet packet decomposition
Quantization
- Each of the decomposed wavelet sections is divided by the quantization value and rounded to the nearest integer
- This creates redundancy in the data which is easier to work with
- Quantization is not lossless
Neural Network Approximation
-An example of the vector with the trainedNeural network attempting to fit it
Lossless Encoding and Reduction
- The entire data stream is then run-lengthencoded (RLE)
- Afterwards we can save the data using the ZIP file format which applies some other lossless encoding methods
Conclusion
- Neural networks can be used to compress images
- However they are probably not the best way to go unless the data can be
represented in some easier way- Most of the compression came from the
quantization organization and Lossless compression stages
References
1 httpenwikipediaorgwikiData_compression
2 httpenwikipediaorgwikiLossless_data_compression
3 httpenwikibooksorgwikiData_Coding_TheoryData_Compression
4 httpenwikibooksorgwikiData_Compression
5 httpdatacompressiondogmanetindexphptitle=Compcompression_FAQ
Annotated Bibliography I choose the text from ndash wwwdata-compressioncomindexshtml wwwdata-compressioncomlosslessshtmlhttpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html httplocaltechwirecombusinesslocal_tech_wirewirestory1276887 httpwwwfutureofgadgetscomfuturebloggershow1730
because it fulfills mine requirement for the topic I choose the graphics from ndash httpimgzdnetcomtechDirectoryLOSSYGIF
httpplusmathsorgissue23featuresdatadatajpg because it clears the situation which I want to explain
Lossless CompressionIn this the compressing and
decompressing algorithms are inverse of each other
TECHNIQUES Run-Length Encoding When data contains repeated strings then these
can be replaced by special marker
original data compressed data
Sources- wwwdata-compressioncomlosslessshtml
572744444444321333333333335278222222 57274083213115278206
Lossless (contd) Statistical compression In this the short codes are used for
frequent symbols and long for infrequent Three common principles are - 1 Morse code 2 Huffman encoding 3 Lempel- Ziv -Welch encoding Relative compression Extremely useful for sending video
commercial TVs and30 frames in every second
References - wwwdata-compressioncomlosslessshtml
Lossy compression Some data in output is lost but not
detected by users Mostly used for pictures videos and
sounds Basic techniques are
1 JPEG 2 MPEG
Referenced -httpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html
Transformation
Quantisation
Encoding
decompress
compress
Latest Developments Fathom 30 Developed by Inlet technologies in
cooperation with Microsoft and Scientific Atlanta
Work with media files for mobiles portable web and high definition
Histor1048668A literature compendium for a large variety ofAudiocoding systems was published in the IEEEJournal on Selected Areas in Communications(JSAC) February 1988 While there weresome papers from before that time thisCollection documented an entire variety of finished working audio coders nearly all ofthem using perceptual (ie masking)Techniquce and some kind of frequencyanalysis and back End noiseless coding
Image Compression Using Neural Networks Overview - Introduction to neural networks Back Propagated (BP) neural network - Image compression using BP neural network- Comparison with existing image compression techniques
Image Compression using BP Neural Network
- Future of Image Coding(analogous to Our visual system) - Narrow Channel K-L transform - The entropy coding of the state vector h is
at the hidden Layer
Image Compression using continuedhellip
- A set of image samples is used to train the network
- This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer
- The image to be subdivided into non-overlapping blocks of n x n pixels each Such block represents N-dimensional vector x N = n x n in N-dimensional space Transformation process maps this set of vectors into y=W (input)
output=W-1y
Transform coding with multilayer Neural Network
Image Compression continuedhellip
The inverse transformation need toreconstruct original image withminimum ofdistortions
Proposed Method
- Wavelet packet decomposition - Quantization - Organization of vectors - Neural network approximation - Lossless encoding and reduction
Wavelet Packet Decomposition
The image is first put through a fewlevels ofwavelet packet decomposition
Quantization
- Each of the decomposed wavelet sections is divided by the quantization value and rounded to the nearest integer
- This creates redundancy in the data which is easier to work with
- Quantization is not lossless
Neural Network Approximation
-An example of the vector with the trainedNeural network attempting to fit it
Lossless Encoding and Reduction
- The entire data stream is then run-lengthencoded (RLE)
- Afterwards we can save the data using the ZIP file format which applies some other lossless encoding methods
Conclusion
- Neural networks can be used to compress images
- However they are probably not the best way to go unless the data can be
represented in some easier way- Most of the compression came from the
quantization organization and Lossless compression stages
References
1 httpenwikipediaorgwikiData_compression
2 httpenwikipediaorgwikiLossless_data_compression
3 httpenwikibooksorgwikiData_Coding_TheoryData_Compression
4 httpenwikibooksorgwikiData_Compression
5 httpdatacompressiondogmanetindexphptitle=Compcompression_FAQ
Annotated Bibliography I choose the text from ndash wwwdata-compressioncomindexshtml wwwdata-compressioncomlosslessshtmlhttpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html httplocaltechwirecombusinesslocal_tech_wirewirestory1276887 httpwwwfutureofgadgetscomfuturebloggershow1730
because it fulfills mine requirement for the topic I choose the graphics from ndash httpimgzdnetcomtechDirectoryLOSSYGIF
httpplusmathsorgissue23featuresdatadatajpg because it clears the situation which I want to explain
Lossless (contd) Statistical compression In this the short codes are used for
frequent symbols and long for infrequent Three common principles are - 1 Morse code 2 Huffman encoding 3 Lempel- Ziv -Welch encoding Relative compression Extremely useful for sending video
commercial TVs and30 frames in every second
References - wwwdata-compressioncomlosslessshtml
Lossy compression Some data in output is lost but not
detected by users Mostly used for pictures videos and
sounds Basic techniques are
1 JPEG 2 MPEG
Referenced -httpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html
Transformation
Quantisation
Encoding
decompress
compress
Latest Developments Fathom 30 Developed by Inlet technologies in
cooperation with Microsoft and Scientific Atlanta
Work with media files for mobiles portable web and high definition
Histor1048668A literature compendium for a large variety ofAudiocoding systems was published in the IEEEJournal on Selected Areas in Communications(JSAC) February 1988 While there weresome papers from before that time thisCollection documented an entire variety of finished working audio coders nearly all ofthem using perceptual (ie masking)Techniquce and some kind of frequencyanalysis and back End noiseless coding
Image Compression Using Neural Networks Overview - Introduction to neural networks Back Propagated (BP) neural network - Image compression using BP neural network- Comparison with existing image compression techniques
Image Compression using BP Neural Network
- Future of Image Coding(analogous to Our visual system) - Narrow Channel K-L transform - The entropy coding of the state vector h is
at the hidden Layer
Image Compression using continuedhellip
- A set of image samples is used to train the network
- This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer
- The image to be subdivided into non-overlapping blocks of n x n pixels each Such block represents N-dimensional vector x N = n x n in N-dimensional space Transformation process maps this set of vectors into y=W (input)
output=W-1y
Transform coding with multilayer Neural Network
Image Compression continuedhellip
The inverse transformation need toreconstruct original image withminimum ofdistortions
Proposed Method
- Wavelet packet decomposition - Quantization - Organization of vectors - Neural network approximation - Lossless encoding and reduction
Wavelet Packet Decomposition
The image is first put through a fewlevels ofwavelet packet decomposition
Quantization
- Each of the decomposed wavelet sections is divided by the quantization value and rounded to the nearest integer
- This creates redundancy in the data which is easier to work with
- Quantization is not lossless
Neural Network Approximation
-An example of the vector with the trainedNeural network attempting to fit it
Lossless Encoding and Reduction
- The entire data stream is then run-lengthencoded (RLE)
- Afterwards we can save the data using the ZIP file format which applies some other lossless encoding methods
Conclusion
- Neural networks can be used to compress images
- However they are probably not the best way to go unless the data can be
represented in some easier way- Most of the compression came from the
quantization organization and Lossless compression stages
References
1 httpenwikipediaorgwikiData_compression
2 httpenwikipediaorgwikiLossless_data_compression
3 httpenwikibooksorgwikiData_Coding_TheoryData_Compression
4 httpenwikibooksorgwikiData_Compression
5 httpdatacompressiondogmanetindexphptitle=Compcompression_FAQ
Annotated Bibliography I choose the text from ndash wwwdata-compressioncomindexshtml wwwdata-compressioncomlosslessshtmlhttpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html httplocaltechwirecombusinesslocal_tech_wirewirestory1276887 httpwwwfutureofgadgetscomfuturebloggershow1730
because it fulfills mine requirement for the topic I choose the graphics from ndash httpimgzdnetcomtechDirectoryLOSSYGIF
httpplusmathsorgissue23featuresdatadatajpg because it clears the situation which I want to explain
Lossy compression Some data in output is lost but not
detected by users Mostly used for pictures videos and
sounds Basic techniques are
1 JPEG 2 MPEG
Referenced -httpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html
Transformation
Quantisation
Encoding
decompress
compress
Latest Developments Fathom 30 Developed by Inlet technologies in
cooperation with Microsoft and Scientific Atlanta
Work with media files for mobiles portable web and high definition
Histor1048668A literature compendium for a large variety ofAudiocoding systems was published in the IEEEJournal on Selected Areas in Communications(JSAC) February 1988 While there weresome papers from before that time thisCollection documented an entire variety of finished working audio coders nearly all ofthem using perceptual (ie masking)Techniquce and some kind of frequencyanalysis and back End noiseless coding
Image Compression Using Neural Networks Overview - Introduction to neural networks Back Propagated (BP) neural network - Image compression using BP neural network- Comparison with existing image compression techniques
Image Compression using BP Neural Network
- Future of Image Coding(analogous to Our visual system) - Narrow Channel K-L transform - The entropy coding of the state vector h is
at the hidden Layer
Image Compression using continuedhellip
- A set of image samples is used to train the network
- This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer
- The image to be subdivided into non-overlapping blocks of n x n pixels each Such block represents N-dimensional vector x N = n x n in N-dimensional space Transformation process maps this set of vectors into y=W (input)
output=W-1y
Transform coding with multilayer Neural Network
Image Compression continuedhellip
The inverse transformation need toreconstruct original image withminimum ofdistortions
Proposed Method
- Wavelet packet decomposition - Quantization - Organization of vectors - Neural network approximation - Lossless encoding and reduction
Wavelet Packet Decomposition
The image is first put through a fewlevels ofwavelet packet decomposition
Quantization
- Each of the decomposed wavelet sections is divided by the quantization value and rounded to the nearest integer
- This creates redundancy in the data which is easier to work with
- Quantization is not lossless
Neural Network Approximation
-An example of the vector with the trainedNeural network attempting to fit it
Lossless Encoding and Reduction
- The entire data stream is then run-lengthencoded (RLE)
- Afterwards we can save the data using the ZIP file format which applies some other lossless encoding methods
Conclusion
- Neural networks can be used to compress images
- However they are probably not the best way to go unless the data can be
represented in some easier way- Most of the compression came from the
quantization organization and Lossless compression stages
References
1 httpenwikipediaorgwikiData_compression
2 httpenwikipediaorgwikiLossless_data_compression
3 httpenwikibooksorgwikiData_Coding_TheoryData_Compression
4 httpenwikibooksorgwikiData_Compression
5 httpdatacompressiondogmanetindexphptitle=Compcompression_FAQ
Annotated Bibliography I choose the text from ndash wwwdata-compressioncomindexshtml wwwdata-compressioncomlosslessshtmlhttpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html httplocaltechwirecombusinesslocal_tech_wirewirestory1276887 httpwwwfutureofgadgetscomfuturebloggershow1730
because it fulfills mine requirement for the topic I choose the graphics from ndash httpimgzdnetcomtechDirectoryLOSSYGIF
httpplusmathsorgissue23featuresdatadatajpg because it clears the situation which I want to explain
Latest Developments Fathom 30 Developed by Inlet technologies in
cooperation with Microsoft and Scientific Atlanta
Work with media files for mobiles portable web and high definition
Histor1048668A literature compendium for a large variety ofAudiocoding systems was published in the IEEEJournal on Selected Areas in Communications(JSAC) February 1988 While there weresome papers from before that time thisCollection documented an entire variety of finished working audio coders nearly all ofthem using perceptual (ie masking)Techniquce and some kind of frequencyanalysis and back End noiseless coding
Image Compression Using Neural Networks Overview - Introduction to neural networks Back Propagated (BP) neural network - Image compression using BP neural network- Comparison with existing image compression techniques
Image Compression using BP Neural Network
- Future of Image Coding(analogous to Our visual system) - Narrow Channel K-L transform - The entropy coding of the state vector h is
at the hidden Layer
Image Compression using continuedhellip
- A set of image samples is used to train the network
- This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer
- The image to be subdivided into non-overlapping blocks of n x n pixels each Such block represents N-dimensional vector x N = n x n in N-dimensional space Transformation process maps this set of vectors into y=W (input)
output=W-1y
Transform coding with multilayer Neural Network
Image Compression continuedhellip
The inverse transformation need toreconstruct original image withminimum ofdistortions
Proposed Method
- Wavelet packet decomposition - Quantization - Organization of vectors - Neural network approximation - Lossless encoding and reduction
Wavelet Packet Decomposition
The image is first put through a fewlevels ofwavelet packet decomposition
Quantization
- Each of the decomposed wavelet sections is divided by the quantization value and rounded to the nearest integer
- This creates redundancy in the data which is easier to work with
- Quantization is not lossless
Neural Network Approximation
-An example of the vector with the trainedNeural network attempting to fit it
Lossless Encoding and Reduction
- The entire data stream is then run-lengthencoded (RLE)
- Afterwards we can save the data using the ZIP file format which applies some other lossless encoding methods
Conclusion
- Neural networks can be used to compress images
- However they are probably not the best way to go unless the data can be
represented in some easier way- Most of the compression came from the
quantization organization and Lossless compression stages
References
1 httpenwikipediaorgwikiData_compression
2 httpenwikipediaorgwikiLossless_data_compression
3 httpenwikibooksorgwikiData_Coding_TheoryData_Compression
4 httpenwikibooksorgwikiData_Compression
5 httpdatacompressiondogmanetindexphptitle=Compcompression_FAQ
Annotated Bibliography I choose the text from ndash wwwdata-compressioncomindexshtml wwwdata-compressioncomlosslessshtmlhttpsearchciomidmarkettechtargetcomsDefinition0sid183_gci21445300html httplocaltechwirecombusinesslocal_tech_wirewirestory1276887 httpwwwfutureofgadgetscomfuturebloggershow1730
because it fulfills mine requirement for the topic I choose the graphics from ndash httpimgzdnetcomtechDirectoryLOSSYGIF
httpplusmathsorgissue23featuresdatadatajpg because it clears the situation which I want to explain
Histor1048668A literature compendium for a large variety ofAudiocoding systems was published in the IEEEJournal on Selected Areas in Communications(JSAC) February 1988 While there weresome papers from before that time thisCollection documented an entire variety of finished working audio coders nearly all ofthem using perceptual (ie masking)Techniquce and some kind of frequencyanalysis and back End noiseless coding
Image Compression Using Neural Networks Overview - Introduction to neural networks Back Propagated (BP) neural network - Image compression using BP neural network- Comparison with existing image compression techniques
Image Compression using BP Neural Network
- Future of Image Coding(analogous to Our visual system) - Narrow Channel K-L transform - The entropy coding of the state vector h is
at the hidden Layer
Image Compression using continuedhellip
- A set of image samples is used to train the network
- This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer
- The image to be subdivided into non-overlapping blocks of n x n pixels each Such block represents N-dimensional vector x N = n x n in N-dimensional space Transformation process maps this set of vectors into y=W (input)
output=W-1y
Transform coding with multilayer Neural Network
Image Compression continuedhellip
The inverse transformation need toreconstruct original image withminimum ofdistortions
Proposed Method
- Wavelet packet decomposition - Quantization - Organization of vectors - Neural network approximation - Lossless encoding and reduction
Wavelet Packet Decomposition
The image is first put through a few levels of wavelet packet decomposition.
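One level of this decomposition can be sketched with the Haar wavelet, the simplest case. This is an illustrative stand-in, not the presentation's exact wavelet: a full wavelet *packet* decomposition would recurse on all four subbands, not just the approximation.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet transform, returning the
    (approximation, horizontal, vertical, diagonal) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation subband
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d_level(img)
# Each subband is half the size of the input in both dimensions;
# smooth images concentrate their energy in ll, leaving the detail
# subbands full of near-zero values for the quantizer to exploit.
```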
Quantization
- Each decomposed wavelet subband is divided by the quantization value and rounded to the nearest integer.
- This creates redundancy in the data, which is easier to work with.
- Quantization is not lossless.
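The divide-and-round step can be sketched in a few lines. The step size of 10.0 and the sample coefficients are illustrative choices; a larger step gives more compression and more loss.

```python
import numpy as np

step = 10.0
coeffs = np.array([3.2, -14.7, 0.4, 98.6, -0.2, 0.1])

q = np.round(coeffs / step).astype(int)   # q == [0, -1, 0, 10, 0, 0]
dequant = q * step                        # approximate reconstruction

# The repeated zeros in q are the redundancy that the later
# run-length / entropy coding stage exploits. The round-trip
# error is bounded by half the step size, so the loss is
# controlled but not zero.
```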
Neural Network Approximation
- An example of a vector with the trained neural network attempting to fit it.
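A minimal sketch of the narrow-channel idea: a single-hidden-layer autoencoder trained by back-propagation to fit a set of vectors. All sizes, the linear units, and the learning rate are illustrative assumptions, not the presentation's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: 16-dimensional vectors that lie on a 2-dimensional
# subspace, so a 2-unit hidden layer can in principle fit them well.
basis = rng.normal(size=(2, 16))
codes = rng.normal(size=(50, 2))
X = codes @ basis
X /= X.std()                      # normalize scale for stable training

n_in, n_hidden = 16, 2            # narrow channel of 2 hidden units
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights

lr = 0.005
initial_loss = np.mean((X @ W1 @ W2 - X) ** 2)
for _ in range(1000):
    H = X @ W1                    # compress into the narrow channel
    Y = H @ W2                    # reconstruct from the hidden layer
    err = Y - X
    gW2 = H.T @ err / len(X)      # back-propagated MSE gradients
    gW1 = X.T @ (err @ W2.T) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1
final_loss = np.mean((X @ W1 @ W2 - X) ** 2)
# final_loss ends up well below initial_loss: the network has learned
# a compressed 2-dimensional code for the 16-dimensional vectors.
```

To compress, only the hidden-layer values H (plus the decoder weights) need to be stored or transmitted.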
Lossless Encoding and Reduction
- The entire data stream is then run-length encoded (RLE).
- Afterwards, the data can be saved in the ZIP file format, which applies further lossless encoding methods.
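The RLE stage can be sketched as a pair of inverse functions over (value, count) pairs; the sample stream mimics quantized coefficients with long zero runs.

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Invert rle_encode, expanding each run back out."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

stream = [5, 0, 0, 0, 0, 0, 3, 3, 0, 0]
encoded = rle_encode(stream)          # [(5, 1), (0, 5), (3, 2), (0, 2)]
```

A general-purpose lossless stage (e.g. DEFLATE, as used by ZIP) can then squeeze the remaining redundancy out of the encoded runs.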
Conclusion
- Neural networks can be used to compress images.
- However, they are probably not the best approach unless the data can be represented in some simpler way.
- Most of the compression came from the quantization, organization, and lossless compression stages.
References
1. http://en.wikipedia.org/wiki/Data_compression
2. http://en.wikipedia.org/wiki/Lossless_data_compression
3. http://en.wikibooks.org/wiki/Data_Coding_Theory/Data_Compression
4. http://en.wikibooks.org/wiki/Data_Compression
5. http://datacompression.dogma.net/index.php?title=Comp.compression_FAQ