Engineering Applications of EHW

Data Compression Based on Evolvable Hardware

Mehrdad Salami*, Masahiro Murakawa** and Tetsuya Higuchi***

Computation Models Section, Electrotechnical Laboratory (ETL)
1-1-4 Umezono, Tsukuba, Ibaraki 305, Japan. e-mail: [email protected]

* The New Energy and Industrial Technology Development Organization (NEDO)
** University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo, Japan
*** Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba, Ibaraki, Japan

Abstract. We have investigated the possibility of applying Evolvable Hardware (EHW) to data compression applications. One of the interesting areas in data compression is Predictive Coding, which we use to compress blocks of data in the hardware configuration of EHW. The advantages of this approach are simplicity, adaptability, real-time implementation for motion pictures and the ability to use non-linear prediction functions. Several configurations of EHW are tested to find the optimal system for data compression, and the results show good performance compared with Neural Network and JPEG approaches.

1 Introduction

Over the last ten years the idea of evolutionary systems has attracted many people in engineering and computer science. Many systems in those fields are not optimized or do not have enough flexibility to work in different situations. Evolutionary systems claim to provide better performance for time-varying systems [1]. As many systems now work at high speed, a faster solution with enough flexibility to deal with different conditions is highly desirable. Specialized hardware can be applied to these systems for speed improvement [2]. On the other hand, specialized hardware does not have enough flexibility in time-varying systems and produces poor results in critical situations. Another approach is using reprogrammable hardware, which has the advantage that the hardware can be programmed for different situations [3]. The speed of reprogrammable hardware is lower than that of specialized hardware, but it gives more flexibility. The main question is how to find the best hardware architecture for a new situation in the system.

The approach that we choose here is based on a predefined hardware architecture which contains many non-linear functions and uses an evolutionary algorithm to find the best configuration for different situations. A system called Evolvable Hardware (EHW) has been designed and will be fabricated in the near future. It contains floating point hardware blocks, programmable by a Genetic Algorithm (GA) software program.

The software GA is capable of changing the architecture of EHW for different situations, which brings adaptive behavior to the system. The system benefits from hardware speed and has the potential to be applied to real-time systems.

The rest of this paper is as follows. In Section 2, Evolvable Hardware is explained in detail. Section 3 explains Data Compression (DC) and predictive coding. Section 4 demonstrates how EHW can be applied to DC. Section 5 presents simulation results of data compression by EHW and compares them with other methods.

2 Evolvable Hardware

Research on EHW was initiated independently in Japan and in Switzerland around 1992 (for recent overviews, see [4] and [5]). Since then, interest has been growing rapidly (e.g. EVOLVE95, the first international workshop on evolvable hardware, was held in Lausanne in 1995).

Most research on EHW, however, shares the problem that the evolved circuit size is small. Hardware evolution is usually based on primitive gates such as the AND-gate and the OR-gate; we call evolution at this level gate-level evolution. Gate-level evolution is not powerful enough for use in industrial applications.

In order to solve this problem, we propose a new type of hardware evolution, function-level evolution, and a new FPGA (Field Programmable Gate Array) architecture dedicated to function-level evolution. EHW can synthesize non-linear functions genetically. This suggests that EHW may substitute for Artificial Neural Networks (ANN) in industrial applications, because EHW enables a faster and more compact implementation than ANN, in addition to better understandability of the learned results.

We use the FPGA model in Figure 1 to realize function-level evolution. The FPGA model consists of 20 columns, each containing five Programmable Floating processing Units (PFUs). Each PFU can implement one of the following seven functions: an adder, a subtracter, an if-then, a sine generator, a cosine generator, a multiplier, and a divider. The selection of the function to be implemented by a PFU is determined by chromosome genes given to the PFU. Constant generators are also included in each PFU. Columns are interconnected by crossbar switches. The crossbars determine the inputs to the PFUs.

In addition to these columns, a state register holding a past output is prepared for applications which deal with time-continuous data. This FPGA model assumes two inputs and one output. Data handled by the FPGA are floating point numbers [6].
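
As a rough software illustration of this model, the sketch below represents the chromosome as one gene record per PFU (function selection, two crossbar input choices and a constant) and evaluates the 20-column array on two floating point inputs. The if-then semantics, the divide-by-zero guard and the clamping range are assumptions made for the sketch, not the specification of the chip.

```python
import math
import random

# The seven PFU functions named in the text. The "if-then" semantics and the
# divide-by-zero guard are assumptions made for this sketch.
PFU_FUNCTIONS = [
    lambda a, b: a + b,                      # adder
    lambda a, b: a - b,                      # subtracter
    lambda a, b: a if a > 0 else b,          # if-then (assumed semantics)
    lambda a, b: math.sin(a),                # sine generator
    lambda a, b: math.cos(a),                # cosine generator
    lambda a, b: a * b,                      # multiplier
    lambda a, b: a / b if b != 0 else 0.0,   # divider (guarded)
]

N_COLUMNS, PFUS_PER_COLUMN = 20, 5

def random_chromosome():
    """One gene record per PFU: function selection, two crossbar input choices
    and a constant (each PFU also contains a constant generator)."""
    genes = []
    for col in range(N_COLUMNS):
        # a PFU may read the primary inputs X, Y or any PFU of the previous column
        n_sources = 2 + (PFUS_PER_COLUMN if col > 0 else 0)
        for _ in range(PFUS_PER_COLUMN):
            genes.append({"func": random.randrange(len(PFU_FUNCTIONS)),
                          "in_a": random.randrange(n_sources),
                          "in_b": random.randrange(n_sources),
                          "const": random.uniform(-1.0, 1.0)})
    return genes

def evaluate(genes, x, y):
    """Propagate the two inputs through the 20 columns and return one output Z."""
    sources = [x, y]                                # the first column sees only X and Y
    idx = 0
    for _ in range(N_COLUMNS):
        outputs = []
        for _ in range(PFUS_PER_COLUMN):
            g = genes[idx]; idx += 1
            a = sources[g["in_a"] % len(sources)]
            b = sources[g["in_b"] % len(sources)]
            z = PFU_FUNCTIONS[g["func"]](a, b) + g["const"]
            outputs.append(max(-1e6, min(1e6, z)))  # clamp to keep values finite
        sources = [x, y] + outputs                  # crossbar choices for the next column
    return sources[-1]                              # one PFU output is taken as Z
```

For example, evaluate(random_chromosome(), 0.3, 0.7) returns one floating point output for a pair of inputs; the GA then searches over such chromosomes.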

EHW can be used in many kinds of applications where hardware specifications are not known. It may replace ANNs in industrial applications because of its real-time response and compatibility with time-continuous data. We are developing a prototype system board to demonstrate the on-line evolution of a hardware device.

Fig. 1. The FPGA model for function-level evolution: 20 columns of PFUs (Programmable Floating processing Units) with crossbar interconnections, feedback via a state register, inputs X and Y, and output Z.

3 Data Compression and Predictive Coding

Digital transmission is a dominant means of communication for voice and images. It is expected to provide flexibility, reliability and cost effectiveness, with the added potential for communication privacy and security through encryption. The costs of digital storage and transmission media are generally proportional to the amount of digital data that can be stored or transmitted. While the cost of such media decreases every year, the demand for their use increases at an even higher rate. Therefore there is a continuing need to minimize the number of bits necessary to transmit images while maintaining acceptable image quality.

Normally, images show a high degree of correlation among neighboring samples. A high degree of correlation implies a high degree of redundancy in the raw data. Therefore, if this redundancy is removed, a more efficient and hence compressed coding of the signal is possible [7]. This can be achieved through the use of Predictive Coding (PC).

Figure 2 shows a block diagram of the predictive coding system. In the figure, x(n) are the input values (e.g. pixel values), x̂(n) are the predicted values, e(n) are the error values equal to x̂(n) - x(n), y(n) are the reconstructed values and ê(n) is the transmitted error value after quantization.

In predictive coding an image is replaced by [8]
1 - a formula which predicts each pixel from previous or neighboring pixels, and
2 - the error at each pixel, which is the difference between the predicted value and the original value.

In this coding, repeating or correlated information is captured by the formula and uncorrelated information is handled by the error values. This algorithm can be used for lossy or lossless compression. In lossy compression the error is ignored or sent partially, and in lossless compression the error is fully quantized and recovered at the receiver.

Fig. 2. Block diagram of the Predictive Coding system (prediction, quantization and reconstruction).
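
A minimal sketch of the loop in Figure 2, assuming a lossless setting in which the quantizer is simple rounding; the predictor is passed in as a function of the already reconstructed samples.

```python
def predictive_encode(samples, predict, quantize=round):
    """Predictive coding loop of Figure 2: predict, take the error, quantize it,
    and keep the same reconstruction y(n) that the decoder will form."""
    reconstructed, errors = [], []
    for n, x in enumerate(samples):
        x_hat = predict(reconstructed, n)     # prediction from past reconstructed values
        e_q = quantize(x_hat - x)             # e(n) = x_hat(n) - x(n), then quantized
        errors.append(e_q)                    # only the quantized errors are transmitted
        reconstructed.append(x_hat - e_q)     # y(n)
    return errors

def predictive_decode(errors, predict):
    """Decoder: rebuild y(n) from the received errors with the same predictor."""
    reconstructed = []
    for n, e_q in enumerate(errors):
        reconstructed.append(predict(reconstructed, n) - e_q)
    return reconstructed

# Usage with a trivial previous-sample predictor (purely illustrative).
previous = lambda recon, n: recon[-1] if recon else 0
data = [10, 12, 13, 13, 11]
assert predictive_decode(predictive_encode(data, previous), previous) == data
```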

The predictive coding formula may be linear or non-linear [9]. In linear Predictive Coding, a gray level g(i,j) is predicted by a linear combination of the four pixel values shown in Figure 3. The coefficients of the linear prediction are determined by applying the least squares method to the neighboring area of g(i,j).

In a non-linear approach, a function f is selected and optimized to predict the pixel values. If the same neighborhood as in linear PC is used, then each pixel can be estimated as

g(i,j) = f[ g(i-1,j-1), g(i,j-1), g(i-1,j), g(i+1,j-1) ]

JPEG (Joint Photographic Experts Group) uses a simple PC for lossless compression. The predictor function in JPEG combines the values of up to three neighboring samples (g(i-1,j), g(i,j-1) and g(i-1,j-1) in Figure 3) to form a prediction of g(i,j). This prediction is then subtracted from the actual value of g(i,j), and the difference is encoded by entropy coding methods. Any one of the seven prediction functions listed in Table 1 can be used. Functions 1, 2 and 3 are one-dimensional predictions and functions 4, 5, 6 and 7 are two-dimensional predictions [10].

Fig. 3. Prediction of g(i,j) from its four neighboring pixels g(i-1,j-1), g(i,j-1), g(i-1,j) and g(i+1,j-1).

Table 1. Prediction functions for JPEG lossless coding.

No.   Prediction Function f
1     g(i-1, j)
2     g(i, j-1)
3     g(i-1, j-1)
4     g(i-1, j) + g(i, j-1) - g(i-1, j-1)
5     g(i-1, j) + (g(i, j-1) - g(i-1, j-1))/2
6     g(i, j-1) + (g(i-1, j) - g(i-1, j-1))/2
7     (g(i-1, j) + g(i, j-1))/2
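
Written as code, the Table 1 predictors look as follows (a sketch assuming g is indexed as g[i][j] with the neighbor layout of Figure 3 and integer division for the halving; boundary pixels are not handled here).

```python
# The seven JPEG lossless prediction functions of Table 1.
JPEG_PREDICTORS = {
    1: lambda g, i, j: g[i - 1][j],
    2: lambda g, i, j: g[i][j - 1],
    3: lambda g, i, j: g[i - 1][j - 1],
    4: lambda g, i, j: g[i - 1][j] + g[i][j - 1] - g[i - 1][j - 1],
    5: lambda g, i, j: g[i - 1][j] + (g[i][j - 1] - g[i - 1][j - 1]) // 2,
    6: lambda g, i, j: g[i][j - 1] + (g[i - 1][j] - g[i - 1][j - 1]) // 2,
    7: lambda g, i, j: (g[i - 1][j] + g[i][j - 1]) // 2,
}

def jpeg_predict(g, i, j, selector=4):
    """Predict g[i][j] with one of the seven functions; the difference
    g[i][j] - prediction is what gets entropy coded."""
    return JPEG_PREDICTORS[selector](g, i, j)
```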

Most of the work so far in PC is based on linear functions, but this method is not always effective because it assumes that the image obeys a linear model and that the statistical properties of the image remain constant over the whole image [11]. In our work these two assumptions are removed and a more flexible and effective algorithm based on EHW is applied. EHW, because of the structure of its PFUs, is designed to find a non-linear function for prediction. For compression, the image is divided into smaller blocks and a non-linear function is found for each block. This guarantees that a suitable function is selected for each part of the image. The next section explains this approach in more detail.

4 EHW for Predictive Coding

EHW can be considered as a predictive function to be optimized. This function is non-linear and changes depending on the characteristics of the block being compressed. In this approach one image is divided into a number of blocks (Figure 4) and then one function has to be found for each block.

Selecting one function for the whole image would produce poor results. By using EHW to find a function for each region of the image, we are actually implementing adaptive predictive coding for the image. Using non-linear functions in EHW enhances the performance, because most blocks show a non-linear relationship.

Selection of the block size and of the neighboring pixels is very important for our approach. If the block size is too small, then the EHW chromosome will be too long to represent the block in compressed form. Assume a block size of 2 by 2 pixels is used for PC and the chromosome length for EHW is 250 bits on average; then the 32 bits (2*2*8) of information in each block would be replaced by 250 bits! On the contrary, if the block size is too large, then the EHW cannot reflect the characteristics of a local area, and that leads to poor performance. In practice, finding a prediction function for large blocks (say 64 by 64 pixels) is impossible even with non-linear functions. Besides, the time required by EHW to find the function will be extremely high if the block size is very small or very large: if the block size is small, then too many blocks must be compressed, and if the block size is large, then the computation time for finding a function will be high. So there should be an optimum block size. For EHW we tested blocks of 8 by 8, 16 by 16 and 32 by 32, which are used in other works, and we found that 16 by 16 produces the best result regarding the quality of the picture and the number of bits generated for each block.
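
The sketch below illustrates the block partitioning of Figure 4 together with a bare-bones stand-in for the per-block evolutionary search. The GA operators of the actual EHW system are not reproduced here; fitness, random_chrom and mutate are assumed callables supplied by the EHW model (for instance, the chromosome sketch in Section 2).

```python
import random

def blocks(image, size=16):
    """Split an image (a list of pixel rows) into size-by-size blocks (Figure 4)."""
    for top in range(0, len(image), size):
        for left in range(0, len(image[0]), size):
            yield [row[left:left + size] for row in image[top:top + size]]

def evolve_block(block, fitness, random_chrom, mutate,
                 population=200, generations=1000):
    """Stand-in for the GA search: keep the best chromosome found for one block.
    fitness(chrom, block) should return the prediction quality (e.g. SNR)."""
    pop = [random_chrom() for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, block), reverse=True)
        survivors = pop[:population // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=lambda c: fitness(c, block))

def compress_image(image, fitness, random_chrom, mutate, block_size=16):
    """One evolved EHW configuration per block is the compressed representation."""
    return [evolve_block(b, fitness, random_chrom, mutate)
            for b in blocks(image, block_size)]
```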

Fig. 4. Dividing a 256 by 256 pixel image into blocks of 16 by 16 pixels.

If the number of neighboring pixels is small, then finding a function that predicts well over the whole block is difficult. On the other hand, a high number of neighboring pixels takes a long time to find the function. Not only the number of neighbors is important, but also the locations of the neighboring pixels. For EHW we select the neighborhood shown in Figure 3. It is used in Predictive Coding for most linear and non-linear functions, and we follow that configuration.

5 Simulation Results

The system described above was used to generate compressed data for digital images. A 256 by 256 pixel image of Lenna was used for testing. As the image is grayscale, each pixel has an intensity value between 0 and 255. The image was partitioned into 256 blocks, each containing 256 pixels. For each block, one configuration was found for EHW to predict pixel values from their neighboring pixels. Each block could then be compressed as one EHW configuration. If the quality of the predicted pixels is not good (a low signal to noise ratio), extra bits are sent along with the EHW configuration to increase the quality of the picture.

Many EHW configurations were tested to find the best performance. First, the three block sizes of 8 by 8, 16 by 16 and 32 by 32 were tested. The block size of 16 by 16, as mentioned in the previous section, produced the best results. Second, we need to find which distribution of pixel values is best matched with EHW. Each data compression algorithm requires an appropriate distribution of pixel values. Three different codings of the pixel values were tested (a code sketch of these mappings follows the list).

1 - The pixel value varies between -128 and 127.
2 - The pixel value varies between 0 and 255.
3 - A mixture of the above two codings. The pixel value originally varies between -128 and 127, and the negative part is then transferred to 128 to 255, so the code finally varies between 0 and 255.
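
Our reading of these three codings, written as mappings from the raw 0-255 intensity to the value handled by EHW (the exact mappings used in the experiments are an assumption here), is:

```python
def code_scheme_1(pixel):
    """Scheme 1: signed values between -128 and 127."""
    return pixel - 128

def code_scheme_2(pixel):
    """Scheme 2: raw intensity between 0 and 255."""
    return pixel

def code_scheme_3(pixel):
    """Scheme 3: the signed value of scheme 1, with the negative part
    transferred to 128..255, so the code again spans 0..255."""
    v = pixel - 128
    return v if v >= 0 else v + 256
```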

Table 2 compares the simulation results for the above codings.

Table 2. Signal to Noise Ratio (SNR) and bit per pixel (bpp) for different coding schemes (population=200 and generations=1000).

Coding     Signal to Noise Ratio (SNR)   bit per pixel (bpp)
Scheme 1   20.11 dB                      1.35
Scheme 2   19.90 dB                      1.33
Scheme 3   28.54 dB                      0.88

According to the table, the third coding produces the best results. Both the SNR and the bpp are much better than in the other two coding schemes.

In the next experiment the population size and the number of generations are varied to find the performance for different execution times. Normally, in real-time operation the search must terminate as fast as possible, and this simulation shows how much performance is lost if the search is forced to terminate early. Table 3 shows simulation results when the population size is changed, and Table 4 shows simulation results when the number of generations is changed.

Table 3. Simulation results for two different population sizes (generations=1000 and scheme 3).

Population   Signal to Noise Ratio (SNR)   bit per pixel (bpp)
50           27.92 dB                      0.93
200          28.54 dB                      0.88

Table 4. Simulation results for two different numbers of generations (population=200 and scheme 3).

Generations   Signal to Noise Ratio (SNR)   bit per pixel (bpp)
100           27.45 dB                      0.97
1000          28.54 dB                      0.88

These two tables show that when the time available to find the best configuration is decreased, the performance is reduced, but the reduction is not very large.

Most data compression algorithms provide a parameter for the user to control the image quality or SNR. In the EHW approach we introduce a threshold parameter for the SNR of a block. For each block, if EHW can compress the block with an SNR greater than the threshold value, then no additional error data is required for compression. On the other hand, if the SNR of the block is less than the threshold, then error data is necessary for the compression to reach the target SNR. In the last experiment, the threshold for sending extra bits for a block is examined. Two threshold values of 20 dB and 25 dB are tested, and the simulation results are presented in Table 5.
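
A sketch of that per-block decision, under an assumed SNR definition (signal power over squared prediction error, in dB; the paper does not state its exact formula):

```python
import math

def block_snr_db(original, predicted):
    """SNR of one block in dB, assuming signal power over squared prediction error."""
    signal = sum(p * p for row in original for p in row)
    noise = sum((p - q) ** 2
                for row_o, row_p in zip(original, predicted)
                for p, q in zip(row_o, row_p))
    if noise == 0:
        return float("inf")   # perfect prediction
    if signal == 0:
        return float("-inf")  # degenerate all-zero block
    return 10.0 * math.log10(signal / noise)

def needs_error_data(original, predicted, threshold_db=20.0):
    """Extra (quantized error) bits are sent only when the evolved configuration
    alone does not reach the SNR threshold for this block."""
    return block_snr_db(original, predicted) < threshold_db
```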

The table shows a large difference in bits per pixel and suggests that 20 dB is a good choice, but the SNR must be improved in that case.

Table 5. Simulation results for two different threshold values (generations=1000, population=200 and scheme 3).

Threshold   Signal to Noise Ratio (SNR)   bit per pixel (bpp)
20 dB       25.60 dB                      0.53
25 dB       28.54 dB                      0.88

Finally, the compressed picture is compared with the results of compression by Neural Network [12] and JPEG [13] approaches, as shown in Figure 5.

The figure shows that EHW is very competitive with the NN and JPEG compression, and with further improvements to our system, which are not very difficult, we expect much better performance than the NN and JPEG approaches. We are working on variable block sizes for data compression, and we find a much better performance, which we will report later.

6 Discussions

The previous sections show that EHW has a good capability for data compression, and the simulation results show performance close to that of other data compression methods. However, these results must certainly be improved, and the current investigations aim at producing results far better than before.

Another point is the real-time characteristic of this method. At the moment EHW is applied to static images, but later it will be applied to real-time compression of motion pictures. By using the hardware compression ability of EHW and by establishing a mechanism for real-time compression, we should be able to compete with MPEG.

We are working on a new version of this approach based on variable block sizes. It has shown a good increase in the performance of our approach, which we will report later.

7 Conclusions

This research shows that EHW can be applied to data compression, and the simulation results show good compression performance for EHW. Several configurations were tested to find the optimum EHW system for data compression, and more tests are required to produce a complete system with high performance. This system is capable of being applied in real time to motion pictures and gives us hope for widespread application of evolvable hardware in current technology.

Fig. 5. Comparison between the EHW, NN and JPEG compression systems. A) Original Lenna image. B) JPEG compression (bpp=0.5, SNR=26.5 dB). C) NN compression (bpp=0.7, SNR=26.95 dB); this image is a copy of the Lenna image in [12]. D) EHW compression (bpp=0.88, SNR=28.54 dB).

References

1. Hemmi H., Mizoguchi J., and Shimohara K., "Development and Evolution of Hardware Behaviors", Proceedings of Artificial Life IV, MIT Press, 1994.

2. Salami M. and Cain G., "Adaptive Hardware Optimization Based on Genetic Algorithms", Proceedings of the Eighth International Conference on Industrial Applications of Artificial Intelligence & Expert Systems (IEA/AIE 95), Melbourne, Australia, June 1995, pp. 363-371.

3. Higuchi T. et al., "Evolvable Hardware and its Applications to Pattern Recognition and Fault-tolerant Systems", Proceedings of the First International Workshop Toward Evolvable Hardware, Lausanne, Switzerland, Lecture Notes in Computer Science, Springer-Verlag, 1995.

4. Higuchi T. et al., "Evolvable Hardware", in Massively Parallel Artificial Intelligence, edited by Kitano H. and Hendler J., pp. 398-421, MIT Press, 1994.

5. Marchal P. et al., "Embryological Development on Silicon", Proceedings of Artifi- cial Life IV, MIT Press, 1994.

6. Murakawa M. et al., "Hardware Evolution at Function Level", Proceedings of Parallel Problem Solving from Nature (PPSN), 1996.

7. Li J. and Manikopoulos C.N., "Nonlinear Prediction in Image Coding with DPCM", Electronics Letters, Vol. 26, No. 17, August 1990, pp. 1357-1359.

8. Kuroki N., Nomura T., Tomita M., and Hirano K., "Lossless Image Compression by Two-Dimensional Linear Prediction with Variable Coefficients", IEICE Transactions on Fundamentals, Vol. E75-A, No. 7, July 1992, pp. 882-889.

9. Tekalp A.M., Kaufman H., and Woods J.W., "Fast Recursive Estimation of the Parameters of a Space-Varying Autoregressive Image Model", IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-33, No. 2, April 1985, pp. 469-472.

10. Wallace G.K., "The JPEG Still Picture Compression Standard", Communications of the ACM, Vol. 34, No. 4, April 1991, pp. 30-44.

11. Dukhovich I.J., "A DPCM System Based on a Composite Image Model", IEEE Transactions on Communications, Vol. 31, No. 8, April 1983, pp. 1003-1017.

12. Parodi G. and Passaggio F., "Size-Adaptive Neural Network for Image Compression", Proceedings of the First International Conference on Image Processing (ICIP-94), IEEE Computer Society Press, Vol. 3, pp. 945-947.

13. Lane T., The Independent JPEG Group Public Domain JPEG, Shareware Software, 1996.