
Image Processing by Three-Layer Cellular Neural Networks with a New Layer Arrangement

Muhammad Izzat bin Mohd Idrus, Yoshihiro Kato, Yoko Uwate and Yoshifumi Nishio
Dept. of Electrical and Electronic Eng., Tokushima University
2-1 Minami-Josanjima, Tokushima, Japan
Email: izzat, kkato, uwate, [email protected]

Abstract: In this study, we propose a new layer arrangement for the three-layer cellular neural network (CNN). We investigate the output characteristics obtained by applying the proposed method to image processing of gray-scale and binary images, and we show its effectiveness with simulation results.

I. INTRODUCTION

Cellular Neural Networks (CNN) were introduced by Chua and Yang in 1988 [1][2]. The idea of the CNN was inspired by the architecture of cellular automata and neural networks. The CNN has a local connectivity property, and it has been successfully applied to various image processing tasks. Compared with conventional neural networks, the CNN has been studied in various directions such as image processing and pattern recognition. In earlier studies of the CNN, the single-layer CNN was introduced with many types of templates, and after that, the two-layer CNN was proposed for higher-performance processing [3]. Recently, to obtain high performance in color image processing, the three-layer CNN was proposed [4].

In this study, we use a new layer arrangement of the three-layer CNN with three types of templates. We investigate the output characteristics of the conventional CNN, the two-layer CNN and the proposed CNN by applying them to binary and gray-scale images to remove noise and detect edges simultaneously. In this paper, we describe the conventional CNN, the two-layer CNN, the three-layer CNN and the proposed CNN in Sec. II. In Sec. III, we show simulation results of the conventional CNN, the two-layer CNN and the proposed CNN, and compare all of these CNN structures. Finally, we conclude our study in Sec. IV.

II. DEVELOPMENT OF THE CNN STRUCTURES

In this section, we explain the development and the characteristics of the CNN structures: the conventional CNN, the two-layer CNN, the three-layer CNN and our proposed structure.

A. Conventional CNN

The conventional CNN is the first CNN, proposed in [1], and has only a single layer. This means that the conventional CNN can perform only one type of operation in one process. For some image processing tasks, two or more processes have to be carried out to obtain the desired result.

B. Two-layer CNN

To obtain better results for some image processing tasks, the two-layer CNN was introduced [3]. The two-layer CNN is a combination of two single-layer CNNs. The two single-layer CNNs are connected by the coupling templates C1 and C2, which transfer data between the first and second layers. Compared with the conventional CNN, the two-layer CNN can use two or more templates in one process. In our previous study [3], the two-layer CNN was confirmed to be more efficient than the single-layer CNN for some image processing tasks.

C. Three-layer CNN

To obtain better results in color image processing, the three-layer CNN was proposed in our previous study [4]. Several types of three-layer CNN exist. As in the two-layer CNN, the three single-layer CNNs are connected by three coupling templates C1, C2 and C3, which transfer data between the layers. In our previous study [4], the three-layer CNN was confirmed to give efficient results in color image processing.

D. Proposed CNN

In this study, we introduce a new structure of the CNN. The block diagram of the proposed CNN is shown in Fig. 1.

Figure 1 shows that the output of the first layer CNN (CNN1) is delivered to the second layer CNN (CNN2) and the third layer CNN (CNN3) through the coupling template C1. In the other direction, the outputs of CNN2 and CNN3 are delivered to CNN1 through the coupling templates C2 and C3, respectively. In the previous study of the three-layer CNN [4], all three single-layer CNNs have mutual connections among them.


Fig. 1. Block Diagram of the proposed CNN.

However, in our proposed CNN, CNN2 and CNN3 do not interact directly with each other. This arrangement is the point of difference between the previous three-layer CNN structure and the proposed CNN.
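To summarize the signal flow of Fig. 1 in a compact form, the connectivity of the proposed arrangement can be written down as a small adjacency map. The snippet below is purely illustrative; the variable names are not from the original paper.

```python
# Signal flow of the proposed CNN (Fig. 1): which coupling template carries
# each layer's output, and where that output is delivered.
proposed_coupling = {
    "CNN1": {"via": "C1", "feeds": ["CNN2", "CNN3"]},  # CNN1 output goes to both outer layers
    "CNN2": {"via": "C2", "feeds": ["CNN1"]},          # CNN2 output returns only to CNN1
    "CNN3": {"via": "C3", "feeds": ["CNN1"]},          # CNN3 output returns only to CNN1
}
# Unlike the fully coupled three-layer CNN of [4], there is no direct CNN2-CNN3 link.
```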

The equations of the proposed CNN are described below.

(1) State equations of the proposed CNN:

State equation of CNN1:

C \frac{dv_{x1ij}}{dt} = -\frac{1}{R} v_{x1ij}
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} A_1(i,j;k,l)\, v_{y1kl}(t)
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} B_1(i,j;k,l)\, v_{u1kl}(t)
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} C_2(i,j;k,l)\, v_{y2kl}(t)
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} C_3(i,j;k,l)\, v_{y3kl}(t) + I_1
\quad (|i-k| \le 1,\ |j-l| \le 1). \qquad (1)

State equation of CNN2:

C \frac{dv_{x2ij}}{dt} = -\frac{1}{R} v_{x2ij}
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} A_2(i,j;k,l)\, v_{y2kl}(t)
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} B_2(i,j;k,l)\, v_{u2kl}(t)
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} C_1(i,j;k,l)\, v_{y1kl}(t) + I_2
\quad (|i-k| \le 1,\ |j-l| \le 1). \qquad (2)

State equation of CNN3:

C \frac{dv_{x3ij}}{dt} = -\frac{1}{R} v_{x3ij}
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} A_3(i,j;k,l)\, v_{y3kl}(t)
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} B_3(i,j;k,l)\, v_{u3kl}(t)
+ \sum_{k=i-r}^{i+r} \sum_{l=j-r}^{j+r} C_1(i,j;k,l)\, v_{y1kl}(t) + I_3
\quad (|i-k| \le 1,\ |j-l| \le 1). \qquad (3)

(2) Output equations of the proposed CNN:

Output equation of CNN1:

v_{y1ij}(t) = \frac{1}{2} \left( |v_{x1ij}(t) + 1| - |v_{x1ij}(t) - 1| \right). \qquad (4)

Output equation of CNN2:

v_{y2ij}(t) = \frac{1}{2} \left( |v_{x2ij}(t) + 1| - |v_{x2ij}(t) - 1| \right). \qquad (5)

Output equation of CNN3:

v_{y3ij}(t) = \frac{1}{2} \left( |v_{x3ij}(t) + 1| - |v_{x3ij}(t) - 1| \right). \qquad (6)

III. SIMULATION RESULTS

In this section, we show simulation results for the conventional CNN, the two-layer CNN and the proposed CNN. For the simulations, we use three types of templates: “Small object remover”, “Edge detection” and “Smoothing”. All of these original templates can be found in [5].

β€œπ‘†π‘šπ‘Žπ‘™π‘™ π‘œπ‘π‘—π‘’π‘π‘‘ π‘Ÿπ‘’π‘šπ‘œπ‘£π‘’π‘Ÿβ€

𝐴1 =

[1 1 11 2 11 1 1

], 𝐡1 =

[0 0 00 0 00 0 0

], 𝐼1 = 0. (7)

β€œπΈπ‘‘π‘”π‘’ π‘‘π‘’π‘‘π‘’π‘π‘‘π‘–π‘œπ‘›β€

𝐴1 =

[0 0 00 1 00 0 0

], 𝐡1 =

[ βˆ’1 βˆ’1 βˆ’1βˆ’1 8 βˆ’1βˆ’1 βˆ’1 βˆ’1

], 𝐼1 = βˆ’1. (8)

β€œπ‘†π‘šπ‘œπ‘œπ‘‘β„Žπ‘–π‘›π‘”β€

𝐴1 =

[0 1 01 2 10 1 0

], 𝐡1 =

[0 0 00 0 00 0 0

], 𝐼1 = 0. (9)
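As a concrete illustration of how one template set is applied, the sketch below runs a conventional (single-layer) CNN pass with the templates above; it then chains two passes, “Small object remover” followed by “Edge detection”, since a single-layer CNN performs only one operation per process (cf. item (i) below and Sec. III). The Euler parameters, C = R = 1, the toy test image, and the function names are assumptions of this sketch, not part of the original paper.

```python
import numpy as np
from scipy.signal import correlate2d

# Templates of Eqs. (7) and (8) written as 3x3 arrays
A_REMOVER = np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]], dtype=float)   # "Small object remover"
A_EDGE = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)      # "Edge detection" feedback
B_EDGE = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
B_ZERO = np.zeros((3, 3))


def run_single_layer(u, A, B, I, dt=0.05, steps=400):
    # One conventional (single-layer) CNN pass; forward Euler with C = R = 1 assumed
    sat = lambda x: 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
    conv = lambda T, v: correlate2d(v, T, mode="same", boundary="fill", fillvalue=0.0)
    x, Bu = np.array(u, dtype=float), correlate2d(u, B, mode="same", boundary="fill", fillvalue=0.0)
    for _ in range(steps):
        x += dt * (-x + conv(A, sat(x)) + Bu + I)
    return sat(x)


# A single-layer CNN performs one operation per run, so noise removal and edge
# detection need two successive passes:
rng = np.random.default_rng(0)
img = -np.ones((64, 64))
img[20:44, 20:44] = 1.0                    # black square (+1) on a white (-1) background
img[rng.random((64, 64)) < 0.03] *= -1.0   # sprinkle a few noise pixels
cleaned = run_single_layer(img, A_REMOVER, B_ZERO, 0.0)   # Eq. (7)
edges = run_single_layer(cleaned, A_EDGE, B_EDGE, -1.0)   # Eq. (8)
```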

(i) For the conventional CNN, we use two of the original templates, “Small object remover” and “Edge detection”. The “Small object remover” template deletes small noise in images, and the “Edge detection” template detects the edges of objects.

(ii) For the two-layer CNN, the templates are designed as below; a schematic one-step sketch follows Eq. (12). This idea was inspired by the previous study [3].

π‘‡π‘’π‘šπ‘π‘™π‘Žπ‘‘π‘’ π‘œπ‘“ 𝐢𝑁𝑁1 :

𝐴1 =

[1 1 11 2 11 1 1

], 𝐡1 =

[0 0 00 0 00 0 0

], 𝐼1 = 0. (10)

π‘‡π‘’π‘šπ‘π‘™π‘Žπ‘‘π‘’ π‘œπ‘“ 𝐢𝑁𝑁2 :

𝐴2 =

[0 0 00 1 00 0 0

], 𝐡2 =

[0 0 00 0 00 0 0

], 𝐼2 = βˆ’1. (11)

πΆπ‘œπ‘’π‘π‘™π‘–π‘›π‘” π‘‡π‘’π‘šπ‘π‘™π‘Žπ‘‘π‘’π‘  :

𝐢1 =

[ βˆ’1 βˆ’1 βˆ’1βˆ’1 8 βˆ’1βˆ’1 βˆ’1 βˆ’1

], 𝐢2 =

[0 0 00 1 00 0 0

], (12)
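To illustrate how the two layers of item (ii) cooperate within a single process, the snippet below performs one Euler step of a two-layer CNN with the templates (10)-(12). The two-layer state equations are given in [3] rather than in this paper, so the wiring assumed here (C1 carries CNN1's output into CNN2, and C2 returns CNN2's output to CNN1, by analogy with Eqs. (1) and (2)), together with all names and parameters, is an illustrative assumption.

```python
import numpy as np
from scipy.signal import correlate2d


def sat(x):
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))


def tconv(T, v):
    return correlate2d(v, T, mode="same", boundary="fill", fillvalue=0.0)


def two_layer_step(x1, x2, u, tpl, dt=0.05):
    # One Euler step of a two-layer CNN: both operations advance in the same process
    y1, y2 = sat(x1), sat(x2)
    dx1 = -x1 + tconv(tpl["A1"], y1) + tconv(tpl["B1"], u) + tconv(tpl["C2"], y2) + tpl["I1"]
    dx2 = -x2 + tconv(tpl["A2"], y2) + tconv(tpl["B2"], u) + tconv(tpl["C1"], y1) + tpl["I2"]
    return x1 + dt * dx1, x2 + dt * dx2


# Template set of Eqs. (10)-(12)
tpl = {
    "A1": np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]], float), "B1": np.zeros((3, 3)), "I1": 0.0,
    "A2": np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float), "B2": np.zeros((3, 3)), "I2": -1.0,
    "C1": np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float),
    "C2": np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float),
}
# Iterating two_layer_step lets noise removal and edge detection proceed simultaneously.
```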

(iii) For the proposed CNN, the templates are designed as below, and an illustrative sketch using them follows Eq. (16). We add the “Smoothing” template to our design because it was shown to be effective for edge detection in the two-layer CNN. For the coupling templates, we design C1 as the “Edge detection” operation. We design C2 as a bridge that transfers the output of CNN2 back to CNN1. C3 was obtained empirically; at this moment, we cannot yet explain its values scientifically.


π‘‡π‘’π‘šπ‘π‘™π‘Žπ‘‘π‘’ π‘œπ‘“ 𝐢𝑁𝑁1 :

𝐴1 =

[1 1 11 2 11 1 1

], 𝐡1 =

[0 0 00 0 00 0 0

], 𝐼1 = 0. (13)

π‘‡π‘’π‘šπ‘π‘™π‘Žπ‘‘π‘’ π‘œπ‘“ 𝐢𝑁𝑁2 :

𝐴2 =

[0 0 00 1 00 0 0

], 𝐡2 =

[0 0 00 0 00 0 0

], 𝐼2 = βˆ’1. (14)

π‘‡π‘’π‘šπ‘π‘™π‘Žπ‘‘π‘’ π‘œπ‘“ 𝐢𝑁𝑁3 :

𝐴3 =

[0 1 01 2 10 1 0

], 𝐡3 =

[0 0 00 0 00 0 0

], 𝐼3 = βˆ’0.5. (15)

πΆπ‘œπ‘’π‘π‘™π‘–π‘›π‘” π‘‡π‘’π‘šπ‘π‘™π‘Žπ‘‘π‘’π‘  :

𝐢1 =

[ βˆ’1 βˆ’1 βˆ’1βˆ’1 8 βˆ’1βˆ’1 βˆ’1 βˆ’1

], 𝐢2 =

[0 0 00 1 00 0 0

],

𝐢3 =

[0 0 00 βˆ’2 00 0 0

]. (16)
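For completeness, the proposed template set (13)-(16) can be transcribed directly into the dictionary format used by the illustrative simulate_proposed_cnn sketch given after Eqs. (4)-(6); the variable names and the commented call below are tied to that sketch and are not part of the original paper.

```python
import numpy as np

Z3 = np.zeros((3, 3))
proposed_templates = {
    # CNN1, Eq. (13): "Small object remover" feedback template
    "A1": np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]], float), "B1": Z3, "I1": 0.0,
    # CNN2, Eq. (14): identity feedback with bias -1
    "A2": np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float), "B2": Z3, "I2": -1.0,
    # CNN3, Eq. (15): "Smoothing" feedback with bias -0.5
    "A3": np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], float), "B3": Z3, "I3": -0.5,
    # Coupling templates, Eq. (16)
    "C1": np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float),
    "C2": np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float),
    "C3": np.array([[0, 0, 0], [0, -2, 0], [0, 0, 0]], float),
}

# result = simulate_proposed_cnn(noisy_image, proposed_templates)  # see the sketch in Sec. II
```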

Remark: In this study, we did not simulate the three-layer CNN of [4] because we could not obtain well-designed templates for it.

Fig. 2. Simulation results for binary images. (a) Initial image. (b) Output of the conventional CNN. (c) Output of the two-layer CNN. (d) Output of the proposed CNN.

Figure 2 shows the simulation results for binary images. Figure 2(a) shows the initial image, a black square with noise. Figures 2(b), (c) and (d) show the outputs of the conventional CNN, the two-layer CNN and the proposed CNN for the binary image, respectively. In Figs. 2(b), (c) and (d), the edge of the square is detected in all outputs. In terms of noise removal performance, the outputs of the conventional CNN and the proposed CNN show that the noise is removed completely. However, the two-layer CNN cannot remove the noise completely. These results show that the two-layer CNN does not perform effectively for some image processing tasks. From these simulation results, the conventional CNN and the proposed CNN give better results than the two-layer CNN.

However, the conventional CNN needs two processes to obtain the desired result, whereas the proposed CNN needs only one. In the first process, we apply “Small object remover” to remove the noise. Then, in the second process, we apply “Edge detection” to detect the edge of the square. From these observations, we can say that the proposed CNN exhibited better performance than the conventional CNN and the two-layer CNN in these simulations.

Remark: We did not use the “Smoothing” template in the conventional CNN and the two-layer CNN because it was less effective there.

Fig. 3. Simulation results for gray-scale images. (a) Initial image. (b) Output of the conventional CNN. (c) Output of the two-layer CNN. (d) Output of the proposed CNN.

Figure 3 shows the simulation results for gray-scale images. Figure 3(a) shows the initial image, a black circle with noise. Figures 3(b), (c) and (d) show the outputs of the conventional CNN, the two-layer CNN and the proposed CNN for the gray-scale image, respectively. In Figs. 3(b), (c) and (d), the edge of the circle is detected in all outputs. In terms of noise removal performance, although none of the outputs removes the noise completely, the outputs of the conventional CNN and the proposed CNN are slightly better than that of the two-layer CNN. These results also show that the two-layer CNN does not perform effectively for some image processing tasks. Although the outputs of the conventional CNN and the proposed CNN look almost the same, the output of the proposed CNN is slightly better than that of the conventional CNN when the results are examined in detail. Additionally, as in the simulation for the binary image in Fig. 2, the conventional CNN needs two processes to complete the task. For this gray-scale image simulation, we cannot say that the proposed CNN performs greatly better than the conventional CNN and the two-layer CNN. However, we can say that the proposed CNN performs slightly better than both.


IV. CONCLUSION

In this study, we have proposed a new layer arrangement of the three-layer CNN. In the proposed CNN, only one single-layer CNN connects with the other two single-layer CNNs. Through computer simulations of simple binary and gray-scale images, we investigated the output characteristics of the conventional CNN, the two-layer CNN and the proposed CNN. We can say that the proposed CNN has potential in image processing. However, at the moment, we cannot say that the proposed CNN shows excellent performance compared with the conventional CNN and the two-layer CNN.

In future work, we would like to use other types of images, such as scenery images. We would also like to apply other types of templates to the proposed CNN.

ACKNOWLEDGMENT

This work was partly supported by JSPS Grant-in-Aid for Scientific Research 22500203.

REFERENCES

[1] L. O. Chua and L. Yang, “Cellular Neural Networks: Theory,” IEEE Trans. Circuits Syst., vol. 35, pp. 1257-1272, Oct. 1988.

[2] L. O. Chua and L. Yang, “Cellular Neural Networks: Applications,” IEEE Trans. Circuits Syst., vol. 35, pp. 1273-1290, Oct. 1988.

[3] Z. Yang, Y. Nishio and A. Ushida, “Image Processing of Two-Layer CNNs: Applications and Their Stability,” IEICE Trans. Fundamentals, vol. E85-A, no. 9, pp. 2052-2060, Sep. 2002.

[4] T. Inoue and Y. Nishio, “Research of Three-Layer Cellular Neural Networks Processing Color Images,” IEICE Technical Report, NLP2008-25, pp. 33-36, Jul. 2008.

[5] Cellular Sensory Wave Computers Laboratory, Computer and Automation Research Institute, Hungarian Academy of Sciences, “Cellular Wave Computing Library (Templates, Algorithms, and Programs), Version 2.1.”
