
ISSN 0005-1179, Automation and Remote Control, 2013, Vol. 74, No. 9, pp. 1474–1485. © Pleiades Publishing, Ltd., 2013. Original Russian Text © P.Sh. Geidarov, 2013, published in Avtomatika i Telemekhanika, 2013, No. 9, pp. 53–67.

SYSTEM ANALYSIS AND OPERATIONS RESEARCH

Multitasking Application of Neural Networks

Implementing Metric Methods of Recognition

P. Sh. Geidarov

Institute of Cybernetics, Azerbaijan National Academy of Sciences, Baku, Azerbaijan

Received February 5, 2012

Abstract—The model of a neural network based on metric methods of recognition defines the architecture of a network implementing these methods. In such networks, the number of neurons, layers and connections, as well as the values of weights, can be defined analytically using the initial conditions of a problem (the number of images, templates and attributes). The feasibility of defining the network parameters and architecture analytically allows rapid implementation of a network in the case of multitasking application. Finally, we consider the multitasking application of neural networks based on metric methods of recognition under different conditions with separately computed classifiers.

DOI: 10.1134/S000511791309004X

1. INTRODUCTION

In the design of artificial intelligence systems, neural networks form a leading direction of research. The underlying reason is that neural networks simulate the principles of biological thinking mechanisms. Despite the popularity of artificial intelligence systems, their designers face a series of complexities. For instance, there are no exact mathematical expressions or algorithms defining the necessary structural parameters of a network for a specific problem (the number of neurons, layers and connections). In many cases, network parameters are therefore chosen by trial and error, which appreciably complicates the solution of a posed problem; furthermore, such an approach does not guarantee an optimal combination of network parameters. In models with learning (e.g., the perceptron model), the values of weights are determined through learning algorithms [1–3] that require much execution time and sufficiently large learning samples. The results of learning are variable and depend on the learning sample, the conditions of the learning algorithm and (importantly!) on the structural parameters of the network. The learning process can be accompanied by well-known problems (network paralysis or attainment of a local minimum [3]). Consequently, additional learning becomes necessary (possibly with preliminary modifications of the conditions of the learning algorithm and the structural parameters of the network).

The paper [4] suggested the architecture of a neural network model based on metric methods of recognition. In this context, we note the following. Metric methods of recognition are methods whose recognition procedure involves the values of some measure (a closeness measure) characterizing the distance between a recognized object and a set of separated samples (templates) of a recognized image (a class) in a certain known reference system. The minimal value of such a measure determines the corresponding template image [5], under the assumption that images from the same class are closer to each other than images from different classes.

The proposed architecture of a neural network [4] enables analytic definition of network parameters, namely, the number of neurons, layers and connections, using the initial conditions of a problem. In the presence of separated templates, the values of the weights $w_i$ are defined analytically by the closeness characteristics adopted in metric methods of recognition. Therefore, the architecture of the given networks belongs to the class of definable networks, e.g., Hopfield networks [1–3, 6, 7].


In contrast to these networks, the networks under consideration are not recursive (possess no feedback) and do not suffer from the instability intrinsic to that class of networks. If a separated set of templates is absent, one can separate templates by certain algorithms of preliminary template selection from a learning sample. Another algorithm [4] consists in selecting the minimal possible set of templates (taking into account the existing sample).

2. BASIC PRINCIPLES OF NEURAL NETWORK ARCHITECTURE

Consider the basic principles of neural network architecture formulated in [4] with regard to the image recognition problem for two curves. Figure 1a demonstrates fragments of curves 1 and 2, accepted as templates.

Fig. 1. (a) The weight table for curves; (b) the derivation block diagram for one neuron; (c) the distances $d_1$ and $d_2$ for the point $(i_p, j_p)$; (d) the weight table for templates 2 and 7.

In the elementary case, the closeness measure $Sn_{1,2,3}$ for an arbitrary tested (input) curve (see curve 3 in Fig. 1a) can be evaluated as

$$Sn_{1,2,3} = \sum_{i=1}^{M} \left(j_1^{(i)} - j_3^{(i)}\right)^2 - \sum_{i=1}^{M} \left(j_2^{(i)} - j_3^{(i)}\right)^2. \eqno(1)$$

Here $j_1^{(i)}$, $j_2^{(i)}$, and $j_3^{(i)}$ indicate the values of curves 1, 2, and 3, respectively, at the reference points on axis $j$; $M$ specifies the number of reference points on axis $i$ (in Fig. 1a, $M = 5$). For a single reference point of curve 3 (e.g., $i = 3$, see Fig. 1a), the value $Sn_{1,2,3}^{(i)}$ follows from (1) under $M = 1$:

$$Sn_{1,2,3}^{(3)} = (1 - 5)^2 - (7 - 5)^2 = 12.$$

Imagine that the image in Fig. 1a has been divided into cells $(i, j)$. By analogy, for each cell introduce a weight $w_{ij}$, where $i$ and $j$ denote the indices of a cell with respect to axes $i$ and $j$, respectively:

$$w_{ij} = \left(j_1^{(i)} - j^{(i)}\right)^2 - \left(j_2^{(i)} - j^{(i)}\right)^2. \eqno(2)$$


Using the expressions (1) and (2), it is possible to transform $Sn_{1,2,3}$ into an expression defining the value of the neuron $Sn_{1,2}$ (see Fig. 1b):

$$Sn_{1,2} = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} \left( \left(j_1^{(i)} - j^{(i)}\right)^2 - \left(j_2^{(i)} - j^{(i)}\right)^2 \right) = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_{ij}. \eqno(3)$$

In this formula, $x_{ij} \in \{0, 1\}$ designates the activity of cell $(i, j)$ for an arbitrary tested curve 3 by the following condition: if the curve passes through this cell, then $x_{ij} = 1$; otherwise, $x_{ij} = 0$. The quantities $M$ and $K$ correspond to the number of cells along axes $i$ and $j$, respectively (in Fig. 1a, we have $M = 5$ and $K = 7$). Obviously, the cells with $w_{ij} < 0$ are closer to template curve 1, whereas the cells with $w_{ij} > 0$ are closer to template curve 2. In combination with (3), this fact leads to the conditions on the neuron activation function $f(Sn_{1,2})$ in Fig. 1b:

$$f(Sn_{1,2}) = \begin{cases} 1, & Sn_{1,2} < 0, \\ 0, & Sn_{1,2} > 0. \end{cases} \eqno(4)$$

According to (4), the neuron outputs the closest curve (unity corresponds to curve 1 and zero corresponds to curve 2).
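To make the construction concrete, here is a minimal sketch in Python (the function and variable names are mine, not from the paper) that builds the weight table by (2) for two template curves and evaluates the neuron by (3) and (4):

```python
# A minimal sketch of the single neuron defined by (2)-(4): weights are computed
# analytically from two template curves; no training pass is involved.

def weight_table(j1, j2, K):
    """Weights (2) on an M x K grid: w[i][j-1] = (j1[i]-j)^2 - (j2[i]-j)^2."""
    return [[(j1[i] - j)**2 - (j2[i] - j)**2 for j in range(1, K + 1)]
            for i in range(len(j1))]

def neuron(x, w):
    """Value (3) with activation (4): 1 means curve 1, 0 means curve 2."""
    sn = sum(x[i][j] * w[i][j]
             for i in range(len(w)) for j in range(len(w[0])))
    return 1 if sn < 0 else 0

# Toy data in the spirit of Fig. 1a: M = 5 reference points, K = 7 cells on axis j.
j1 = [1, 2, 3, 4, 5]          # template curve 1
j2 = [7, 6, 5, 4, 3]          # template curve 2
w = weight_table(j1, j2, K=7)

# A tested curve 3 given as an activity grid x[i][j] (1 where the curve passes).
j3 = [6, 6, 5, 5, 4]
x = [[1 if j + 1 == j3[i] else 0 for j in range(7)] for i in range(5)]
print(neuron(x, w))           # 0: the tested curve is closer to template curve 2
```

Since the weights come directly from the templates, no learning algorithm is run; replacing a template only means recomputing its weight table.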

One can employ alternative closeness measures. For instance, consider the recognition problem for graphic symbols. Here the closeness measure represents the minimal geometric distance (see Fig. 1c) to the closest point of a template image loop according to the expressions (5) and (6). The coordinates of cells in the weight table may serve as the coordinates of image points.

In Fig. 1c, the symbols $d_1$ and $d_2$ designate the minimal distances from the point $(i_p, j_p)$ to the loops of the image templates, i.e., to the points $(i_1^{(\min)}, j_1^{(\min)})$ and $(i_2^{(\min)}, j_2^{(\min)})$. Accordingly, the weight $w_{ij}$ of cell $(i, j)$ in the weight table (see Fig. 1d) satisfies the formula

$$w_{ij} = d_1^2 - d_2^2, \eqno(5)$$

where

$$d_1^2 = \left(i_1^{(\min)} - i_p\right)^2 + \left(j_1^{(\min)} - j_p\right)^2, \qquad d_2^2 = \left(i_2^{(\min)} - i_p\right)^2 + \left(j_2^{(\min)} - j_p\right)^2. \eqno(6)$$

As the points $(i_1^{(\min)}, j_1^{(\min)})$ and $(i_2^{(\min)}, j_2^{(\min)})$, one can take the coordinates of the active cells in the image tables of symbols “7” and “2” closest to the cell with coordinates $(i_p, j_p)$. For example, in Fig. 1d the cell with coordinates $(4, 2)$ has the closest active cells $(3, 3)$ (for symbol “7”) and $(4, 1)$ (for symbol “2”). Consequently, the value $w_{ij}$ equals

$$w_{4,2} = \left((3 - 4)^2 + (3 - 2)^2\right) - \left((4 - 4)^2 + (1 - 2)^2\right) = 1.$$

Similarly, one easily evaluates the weight table for all cells (Fig. 1d).
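A minimal sketch of the same computation under the geometric measure (5) and (6); the names are mine, and the two one-cell “templates” merely reproduce the worked example for the cell (4, 2):

```python
def nearest_sq_dist(ip, jp, active_cells):
    """Squared distance from cell (ip, jp) to the closest active template cell."""
    return min((i - ip)**2 + (j - jp)**2 for (i, j) in active_cells)

def distance_weight(ip, jp, template1, template2):
    """Weight (5): d1^2 - d2^2, with d1 and d2 computed per (6)."""
    return nearest_sq_dist(ip, jp, template1) - nearest_sq_dist(ip, jp, template2)

# Worked example for cell (4, 2): the closest active cells are (3, 3) in the
# "7" template and (4, 1) in the "2" template.
seven = {(3, 3)}   # a stand-in fragment of the "7" template loop
two = {(4, 1)}     # a stand-in fragment of the "2" template loop
print(distance_weight(4, 2, seven, two))   # 1, matching w_{4,2} above
```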

Next, Fig. 2 provides the block diagram of a two-layer neural network [4] based on metric methods of recognition for the recognition problem of $N$ templates. In layer 1 of this network, each pair of templates (Fig. 2a) corresponds to a neuron whose value $Sn_{k,m}^{(1)}$ ($k$ and $m$ indicating the numbers of the templates) is given by the closeness measure of the method.

Construct the weight table for each neuron in layer 1. The activation function $f(Sn_{k,m}^{(1)})$ of a layer 1 neuron (Fig. 2a) meets the following conditions:

$$f(Sn_{k,m}^{(1)}) = \begin{cases} 1, & Sn_{k,m}^{(1)} < 0, \\ 0, & Sn_{k,m}^{(1)} > 0. \end{cases} \eqno(7)$$



Fig. 2. The neural network for N templates.

For a fixed template, the number of possible pairs of templates makes up $N - 1$. And so, the activation function for layer 2 neurons is specified by

$$f(Sn_k^{(2)}) = \begin{cases} 1, & Sn_k^{(2)} \ge \alpha(N - 1), \\ 0, & Sn_k^{(2)} < \alpha(N - 1). \end{cases} \eqno(8)$$

Here $\alpha$ means any constant, and the value $Sn_k^{(2)}$ for neuron $k$ in layer 2 (see Fig. 2b) is defined by

$$Sn_k^{(2)} = \alpha \left( \sum_{i=1}^{k-1} \bar{f}(Sn_{i,k}^{(1)}) + \sum_{i=k+1}^{N} f(Sn_{k,i}^{(1)}) \right), \eqno(9)$$

where $\bar{f} = 1 - f$ denotes the negation of a layer 1 output (for a pair $(i, k)$ with $i < k$, the output 1 selects template $i$, so template $k$ is selected by the negated output).

In Fig. 2a, the general number of layer 1 neurons (denoted by $nl_1$) can equal the number of combinations $C_N^2$:

$$nl_1 = C_N^2 = \frac{N(N - 1)}{2} \eqno(10)$$

(i.e., the number of paired combinations of $N$ templates). Alternatively, eliminate from layer 1 all neurons performing pairwise comparison of templates of one and the same image (provided that $N_{temp} > N$). In this case, $nl_1$ can be evaluated as

$$nl_1 = \sum_{j=1}^{N-1} n_j \left( N_{temp} - \sum_{i=1}^{j} n_i \right), \eqno(11)$$

where $n_j$ and $n_i$ mean the numbers of templates for images $j$ and $i$, respectively, $N$ stands for the number of recognized images, and $N_{temp}$ is the total number of templates.
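A minimal sketch (all names are mine) of the complete two-layer pass under the rules (7)–(9) with $\alpha = 1$, together with the neuron counts (10) and (11). The layer 1 values are formed here as differences of per-template closeness sums, which anticipates the decomposition (12) introduced below:

```python
def forward(sn0, alpha=1):
    """sn0[k] is the closeness of the input to template k (smaller = closer).
    Returns the layer 2 outputs: 1 only for the winning template."""
    N = len(sn0)
    # Layer 1, rule (7): f = 1 iff Sn(1)_{k,m} = sn0[k] - sn0[m] < 0.
    f1 = {(k, m): 1 if sn0[k] - sn0[m] < 0 else 0
          for k in range(N) for m in range(k + 1, N)}
    out = []
    for k in range(N):
        # Layer 2, rule (9): negated outputs for pairs (i, k), direct for (k, i).
        sn2 = alpha * (sum(1 - f1[i, k] for i in range(k))
                       + sum(f1[k, i] for i in range(k + 1, N)))
        # Rule (8): the neuron fires iff its template won all N - 1 comparisons.
        out.append(1 if sn2 >= alpha * (N - 1) else 0)
    return out

def nl1_pairs(N):
    """Formula (10): one layer 1 neuron per pair of templates."""
    return N * (N - 1) // 2

def nl1_images(n):
    """Formula (11): pairs of templates of the same image are skipped;
    n[j] is the number of templates of image j, Ntemp = sum(n)."""
    Ntemp, consumed, total = sum(n), 0, 0
    for nj in n[:-1]:
        consumed += nj
        total += nj * (Ntemp - consumed)
    return total

print(forward([24, 4, 272]))   # [0, 1, 0]: template 2 wins (cf. Fig. 8 below)
print(nl1_pairs(4))            # 6 pairwise neurons for N = 4 templates
print(nl1_images([2, 2, 1]))   # 2*(5-2) + 2*(5-4) = 8 for Ntemp = 5, N = 3
```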


3. PROBLEM STATEMENT

The present paper considers the architecture of neural networks based on metric methods of recognition for multitasking application. By multitasking application we mean the feasibility of using one network for different sets of templates in different problems. As is generally known, the multitasking application of neural networks can be implemented by a single set of classifiers (a set of weight values) selected and employed for all problems in a network; alternatively, it can be implemented by separately defined sets of classifiers for each problem. The deployment of a uniform classifier seems optimal, yet it restricts the number of possible problems: the greater the number of problems, the higher the duration, complexity and instability of the corresponding learning process, and the more complicated the choice of a uniform classifier for all problems becomes. On the other hand, using a set of classifiers requires greater memory space but simplifies the learning process and consumes fewer network resources; in fact, it imposes no limits on the number of treated problems. In contrast to conventional models of linear networks [1–3, 6], involving a set of classifiers in neural networks implementing metric methods of recognition enables the following. In the presence of a template sample, it is possible to rapidly adjust the network parameters for each problem through preliminary computation of weights, without traditional learning algorithms. Furthermore, the order and transparency of the structure of a network implementing metric methods of recognition allow preserving the obtained classifiers under variations in the structural parameters of the network (e.g., changes in the number of neurons). With traditional networks, any variation in the structural parameters calls for relearning or additional learning and the creation of new classifiers.

4. MODIFICATIONS IN NETWORK STRUCTURE IN THE CASE OF MULTITASKING APPLICATION

The expressions (10) and (11) define the number of layer 1 neurons and the total number of evaluated weight tables in layer 1 of a neural network. In order to reduce the set of weight tables and simplify the usage of a neural network in the case of multitasking application, layer 0 is added to the network structure. It consists of linear neurons, each corresponding to a template, whose activation functions coincide with the values of the neurons:

$$f(Sn_k^{(0)}) = Sn_k^{(0)},$$

where $Sn_k^{(0)}$ gives the value of neuron $k$ in layer 0. With such neurons, the layer 1 value (3) decomposes as

$$Sn_{1,2}^{(1)} = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_{ij,1,2} = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} (w_{ij,1} - w_{ij,2}) = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_{ij,1} - \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_{ij,2} = Sn_1^{(0)} - Sn_2^{(0)}. \eqno(12)$$

Consequently,

$$Sn_k^{(0)} = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_{ij,k}.$$

Here $w_{ij,k}$ denotes the weight of the cell in column $i$ and row $j$ of weight table $k$, defined by the closeness measure of the method. For instance, in the example of Fig. 1a the value $w_{ij,k}$ can be specified by

$$w_{ij,k} = \left(j_k^{(i)} - j^{(i)}\right)^2.$$



Fig. 3. The partition of one neuron into two linear neurons.


Fig. 4. The block diagram of the neural network for N images with layer 0.

Two outputs of layer 0 neurons are supplied to the inputs of a layer 1 neuron (see Fig. 3); according to (12), their weights constitute $w_1 = 1$ and $w_2 = -1$, respectively, which preserves integrity in the operation of a layer 1 neuron (see formula (12)). The total number of weight tables (WTs) thereby goes down, as against the quantities evaluated by (10) and (11), to the number of used templates $N_{temp}$, or the number of images $N$ in the case of $N_{temp} = N$.
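A minimal sketch (names are mine) of the resulting scheme: one linear layer 0 neuron per template, and each layer 1 pair neuron reduced by (12) to a difference of two layer 0 outputs with the fixed weights $w_1 = 1$ and $w_2 = -1$:

```python
def layer0(x, wt):
    """Linear neuron: Sn_k^(0) = sum_ij x_ij * w_ij,k; its activation is identity."""
    return sum(x[i][j] * wt[i][j]
               for i in range(len(wt)) for j in range(len(wt[0])))

def layer1(sn0_k, sn0_m):
    """Pair neuron via (12): 1 * Sn_k^(0) + (-1) * Sn_m^(0), activation (7)."""
    return 1 if sn0_k - sn0_m < 0 else 0
```

The design choice is a space-time trade: the network stores only $N_{temp}$ weight tables instead of one table per template pair, at the cost of one extra, purely linear, layer.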

In Fig. 4, each neuron $i$ in layer 0 corresponds to output $i$ of the neural network, and each template corresponds to a layer 0 neuron. This enables introducing modifications in the image of a template (or a template set) without computing $N - 1$ weight tables (in contrast to the block diagram in Fig. 2).

5. BLOCK DIAGRAMS OF MULTITASKING APPLICATION OF THE NETWORK

In the multitasking mode, the sets of weight tables are defined for each problem (e.g., see Fig. 5). In addition, for each problem it is necessary to specify the output set $Y_{out}$ representing the set of recognized images of the problem. For example, in Fig. 5 we have $Y_{out} = \{\text{curve 1}, \text{curve 2}\}$ (problem 1) and $Y_{out} = \{7, 2\}$ (problem 2).

Fig. 5. The application of the neural network to problems with two templates.

Fig. 6. (a) The block diagram of the multitasking application of a neural network; (b) the block diagram of a neural network with preliminary separation of the problem classifier.

Next, Fig. 6a demonstrates the general block diagram of the multitasking application of a neural network (NN) with separately defined classifiers. This figure matches the case when, for each problem, several parameters (the number of used templates, the number of used attributes of a template, the number of recognized images) correspond to the parameters of the neural network (the number of layer 0 neurons, the dimensions of the weight tables, and the number of outputs of the network).
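A minimal sketch of a possible data layout for Fig. 6a (the dictionary structure and the toy 2×2 tables are mine, purely illustrative): each problem carries its own set of weight tables and its output set $Y_{out}$, while the network itself is shared:

```python
problems = {
    1: {"weight_tables": [[[24, -3], [0, 5]],    # toy stand-ins for tables per (2)
                          [[-7, 2], [1, -1]]],
        "y_out": ["curve 1", "curve 2"]},
    2: {"weight_tables": [[[2, 0], [0, 2]],      # toy stand-ins for tables per (5)
                          [[0, 2], [2, 0]]],
        "y_out": ["7", "2"]},
}

def load_classifier(problem_id):
    """Select one problem's classifier: its weight tables and its output set."""
    cfg = problems[problem_id]
    return cfg["weight_tables"], cfg["y_out"]
```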

Consider network application in situations where the problem parameters and the network parameters mismatch.

(1) Imagine that, for a certain problem, the dimensions $(m, k)$ of the weight table are smaller than the dimensions $(M, K)$ of the weight table of the neural network ($m < M$, $k < K$). Then the unused columns and rows of the table can be nullified (see Fig. 7a). According to (13), this procedure preserves integrity in the operation of a layer 0 neuron:

$$Sn^{(0)} = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_{ij} = \sum_{i=1}^{m} \sum_{j=1}^{k} x_{ij} w_{ij} + \sum_{\substack{(i,j):\ i > m \\ \text{or}\ j > k}} x_{ij} w_{ij} = \sum_{i=1}^{m} \sum_{j=1}^{k} x_{ij} w_{ij} + \sum_{\substack{(i,j):\ i > m \\ \text{or}\ j > k}} (x_{ij} \times 0) = \sum_{i=1}^{m} \sum_{j=1}^{k} x_{ij} w_{ij}. \eqno(13)$$

Here $m$ and $k$ mean the numbers of columns and rows in the weight table of the problem (in Fig. 7a, we have $m = 3$ and $k = 4$); $M$ and $K$ indicate the numbers of columns and rows in the weight table of the neural network (in Fig. 7a, we have $M = 5$ and $K = 7$).
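A minimal sketch (names are mine) of this zero-padding; per (13), the padded cells contribute nothing to the layer 0 sum:

```python
def pad_weight_table(wt, M, K):
    """Zero-pad an m x k weight table to the network's M x K dimensions."""
    m, k = len(wt), len(wt[0])
    assert m <= M and k <= K
    return [[wt[i][j] if i < m and j < k else 0 for j in range(K)]
            for i in range(M)]

# The dimensions of Fig. 7a: a 3 x 4 problem table inside a 5 x 7 network table.
padded = pad_weight_table([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]], M=5, K=7)
```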

(2) Consider the case when, for a certain problem, the number of templates is smaller than the number of layer 0 neurons. To preserve integrity in the operation of the network, it is necessary to create the weight table of the “distant image” (WTdist.im.) for the inactive neurons in layer 0. The “distant image” is characterized by an appreciable distance to all other templates; its template is generated based on the following rule: the comparison of any two templates by the closeness measure adopted in the metric method of recognition must be smaller than the identical comparison of any template with the template of the “distant image.”

Fig. 7. (a) Nullification of inactive connections in a neuron; (b) the linear search block diagram for templates in the network.

To fulfill this rule, it suffices to satisfy the condition (14), under which all weights in WTdist.im. possess one value $w_r$ exceeding the maximal weight $w_{\max}$ over the set of all weights $w_{ij,k}$ in the real weight tables:

$$w_{ij,k} \le w_{\max} \le w_r. \eqno(14)$$

The value of $w_r$ can be given by

$$w_r = L w_{\max}, \eqno(15)$$

where $L > 1$ denotes the distance rate of the “distant image.” The following is then easy to establish for the neuron $r$ in layer 2 of the neural network corresponding to the “distant image”: according to (14) and (15), this neuron outputs only zero values of the function $f(Sn_r^{(2)})$. To demonstrate this fact, first address formula (12) to estimate the value $Sn_{k,r}^{(1)}$ of a layer 1 neuron that compares template $k$ with the template of the “distant image” (for any input with at least one active cell):

$$Sn_{k,r}^{(1)} = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_{ij,k} - \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_r \le \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_{\max} - \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} w_r = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} (w_{\max} - w_r) = \sum_{i=1}^{M} \sum_{j=1}^{K} x_{ij} (w_{\max} - L w_{\max}) < 0.$$



Fig. 8. The recognition problem for two graphic images using a neural network with three templates.

And so, by virtue of (7), the activation function $f(Sn_{k,r}^{(1)})$ of this neuron equals 1. The value $Sn_r^{(2)}$ for the layer 2 neuron corresponding to the “distant image” follows from (9) (under $\alpha = 1$):

$$Sn_r^{(2)} = \sum_{i=1}^{r-1} \bar{f}(Sn_{i,r}^{(1)}) + \sum_{i=r+1}^{N} f(Sn_{r,i}^{(1)}) = \sum_{\substack{i=1 \\ i \ne k}}^{r-1} \bar{f}(Sn_{i,r}^{(1)}) + \bar{f}(Sn_{k,r}^{(1)}) + \sum_{i=r+1}^{N} f(Sn_{r,i}^{(1)}) = \sum_{\substack{i=1 \\ i \ne k}}^{r-1} \bar{f}(Sn_{i,r}^{(1)}) + 0 + \sum_{i=r+1}^{N} f(Sn_{r,i}^{(1)}) < N - 1.$$

Here $r$ is the number of the neuron corresponding to the “distant image” in layer 0, and $N$ indicates the number of images (templates) in the neural network. Therefore, formula (8) yields the output value of the layer 2 neuron for the “distant image”: $f(Sn_r^{(2)}) = 0$.

Figure 8 demonstrates elementary identification of the tested symbol “2” in recognition of two templates (“7” and “2”) using a neural network with three templates. The weight tables for the templates “7” and “2” (WT1 and WT2, respectively) are determined by the expression (5). Inactive neuron 3 in layer 0 receives the weight table of the “distant image” (WTdist.im.); the corresponding value of the weights ($w_r = 16$) results from formula (15), where $w_{\max} = 8$ is defined by WT1 (see Fig. 8) for the template “7,” and $L = 2$. The table below combines the values and activation functions of all neurons in Fig. 8.

Table

 Layer 0                      Layer 1                            Layer 2
 k  Sn(0)_k  f(Sn(0)_k)  |  k,m  Sn(1)_{k,m}  f(Sn(1)_{k,m})  |  k  Sn(2)_k    f(Sn(2)_k)  |  y_out
 1   24       24         |  1,2    20          0              |  1  0 + 1 = 1   0          |   –
 2    4        4         |  1,3  –248          1              |  2  0̄ + 1 = 2   1          |  {2}
 3  272      272         |  2,3  –268          1              |  3  1̄ + 1̄ = 0   0          |   –
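A minimal sketch (names are mine) of constructing WTdist.im. by (14) and (15), reproducing the numbers of Fig. 8 ($w_{\max} = 8$, $L = 2$, hence $w_r = 16$):

```python
def distant_image_table(weight_tables, M, K, L=2):
    """Constant table with w_r = L * w_max, per (14)-(15), for inactive neurons."""
    w_max = max(w for wt in weight_tables for row in wt for w in row)
    return [[L * w_max] * K for _ in range(M)]

# Toy stand-ins whose maximal weight is 8, as in WT1 of Fig. 8.
wt1 = [[0, 1], [2, 8]]
wt2 = [[1, 0], [4, 2]]
print(distant_image_table([wt1, wt2], M=2, K=2))   # [[16, 16], [16, 16]]

# An input activating q cells yields Sn^(0) = 16 * q for this neuron
# (272 = 16 * 17 in the table above), so by (14) it always loses the
# pairwise comparison against any real template.
```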

(3) If the number of templates involved in a problem exceeds the number of layer 0 neurons, it is possible to apply the sequential search algorithm for templates (see the block diagram in Fig. 7b). Under the condition $n < N_{temp}$, where $n$ designates the number of layer 0 neurons and $N_{temp}$ stands for the number of used templates, the weight tables (WTs) of the first $n$ templates are supplied to the neural network (NN). By feedback (see Fig. 7b), the output results of the neural network separate $WT_i$, the weight table $i$ of the closest template; the weights from this table are again transferred to the inputs of neuron $i$ in layer 0, while the remaining $n - 1$ neurons in layer 0 receive the weight tables of the next $n - 1$ templates. The procedure repeats until all weight tables of the $N_{temp}$ templates are completely exploited. Imagine that, at a certain stage of the sequential search algorithm (e.g., the last one, see Fig. 7b), some layer 0 neurons still remain unused. Then these neurons receive the weight table of the distant image (WTdist.im.) defined for this problem.
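A minimal sketch (names are mine) of this sequential search. Comparing layer 0 sums directly stands in for the pairwise layer 1 comparisons, which is equivalent here since by (12) $Sn_{k,m}^{(1)} < 0$ exactly when $Sn_k^{(0)} < Sn_m^{(0)}$:

```python
def closeness(x, wt):
    """Layer 0 value of the input x against one weight table."""
    return sum(x[i][j] * wt[i][j]
               for i in range(len(wt)) for j in range(len(wt[0])))

def sequential_search(x, weight_tables, n):
    """Index of the overall closest template using only n layer 0 slots
    (assumes n >= 2 so that each stage makes progress)."""
    remaining = list(range(len(weight_tables)))
    # First stage: load the first n weight tables.
    batch = [remaining.pop(0) for _ in range(min(n, len(remaining)))]
    best = min(batch, key=lambda t: closeness(x, weight_tables[t]))
    # Later stages: feed the winner back into one slot, refill the others.
    while remaining:
        batch = [best] + [remaining.pop(0)
                          for _ in range(min(n - 1, len(remaining)))]
        best = min(batch, key=lambda t: closeness(x, weight_tables[t]))
    return best
```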

(4) The block diagram in Fig. 6a proceeds from the following assumption: for each recognized object, we a priori know the recognition problem. Consider the case when the network in Fig. 6a receives an object known to belong to the set of problems but not to a specific problem. Then one can apply the sequential search algorithm to the templates of the network (Fig. 7b), representing all sets of classifiers as one uniform classifier. Such an approach requires no preliminary definition of a problem (however, it increases the duration of computations, since all existing sets of weight tables must be taken into account). This process can be accelerated by preliminary separation of the problem classifier; to this end, one expands the network (NN) with an additional problem (a classifier), see Fig. 6b. To define this classifier, each problem is expressed as a separate image (a class), and a template or templates are specified for this class (e.g., by the search algorithm [4]). The templates of this class serve for creating the weight tables (WTs) of the classes and the weight table of the “distant image” (WTdist.im.) for the given problem. The recognition procedure for an input element runs according to the block diagram in Fig. 6b. At step 1, a necessary class is defined, which corresponds to a recognized object (Fig. 6b, arrow (1)). Later on, the weight tables of the classes (WTs) are supplied to the inputs of the network (NN), and the values from the weight table of the “distant image” (WTdist.im.) are supplied to the unused neurons in layer 0. The number of the active output of the neural network (defining the number of the problem the object belongs to) activates the classifier of the corresponding problem via the feedback (see Fig. 6b, arrow (2)). The weight tables of the separated problem enter the inputs of the neural network, and the output set of templates $Y_{out}$ for this problem goes to the output of the neural network (Fig. 6b, arrows (3)). Finally, object recognition runs within the separated problem.
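A minimal sketch (names are mine) of the two-stage scheme of Fig. 6b: stage 1 separates the problem using the class weight tables, stage 2 recognizes the object within the separated problem's classifier:

```python
def closest(x, weight_tables):
    """Index of the weight table with the smallest layer 0 value."""
    values = [sum(x[i][j] * wt[i][j]
                  for i in range(len(wt)) for j in range(len(wt[0])))
              for wt in weight_tables]
    return values.index(min(values))

def two_stage(x, class_tables, problems):
    """class_tables[p] represents problem p as a class; problems[p] is a pair
    (weight_tables, y_out) holding that problem's classifier."""
    p = closest(x, class_tables)          # steps (1)-(2): separate the problem
    tables, y_out = problems[p]           # step (3): load its classifier
    return y_out[closest(x, tables)]      # recognize within the problem
```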

6. CONCLUSION

Therefore, the multitasking application of neural networks based on metric methods of recognition calls for the following procedure. For each problem, define a priori the sets of templates (e.g., using the template selection algorithm developed in [4]). For each template, find the values of the weight table by the closeness measure adopted in the metric method of recognition. Construct the neural network on the basis of the problem with the maximal number of templates (see the block diagram in Fig. 4); specify the numbers of neurons by formulas (10) and (11). If the number of weights in some weight table is smaller than the number of connections of a layer 0 neuron, nullify the weight values of the inactive connections. If the number of weights in some weight table is greater than the number of connections of a layer 0 neuron, reduce the dimensions of the weight table (either by uniform elimination of rows and columns or by enlarging the partition interval of the template image and recomputing the weight values). If the number of templates applied to a problem is smaller than the number of layer 0 neurons, create the weight table of the “distant image” whose values are applied to all inactive neurons in layer 0. If the number of templates applied to a problem is greater than the number of layer 0 neurons, either decrease the number of templates (e.g., by decreasing the number of templates of one image) or apply the sequential search algorithm for templates to the network (see Fig. 7b). In this case, if belonging of recognized objects to a specific problem is a priori unknown, choose between two alternatives: either supplement the existing set of problems with an additional separation problem for the problem classifier (Fig. 6b), or express the sets of classifiers of all problems as a uniform classifier and apply the sequential search algorithm for templates to this classifier (Fig. 7b).

REFERENCES

1. Mehra, P. and Wah, B.W., Artificial Neural Networks: Concepts and Theory, Los Alamitos: IEEE Comput. Soc. Press, 1992.

2. Golovko, V.L., Neurocomputers and Their Application, vol. 4: Neironnye seti: obuchenie, organizatsiya i primenenie (Neural Networks: Learning, Organization and Application), Galushkin, A.I., Ed., Moscow: Izd. Predpr. Redakts. Zh. Radiotekh., 2001.

3. Wasserman, P.D., Neural Computing: Theory and Practice, New York: Van Nostrand Reinhold, 1989. Translated under the title Neirokomp'yuternaya tekhnika. Teoriya i praktika, Moscow: Mir, 1992.

4. Geidarov, P.S., Neural Networks on the Basis of the Sample Method, Automatic Control Comput. Sci., 2009, vol. 43, no. 4, pp. 203–210.

5. Birger, I.A., Tekhnicheskaya diagnostika (Technical Diagnostics), Moscow: Mashinostroenie, 1978.

6. Mirkes, E.M., Neirokomp'yuter. Proekt standarta (Neurocomputer: Draft Standard), Novosibirsk: Nauka, 1998.

7. Boikov, I.V., Stability of Hopfield Neural Networks, Autom. Remote Control, 2003, vol. 64, no. 9, pp. 1474–1487.

This paper was recommended for publication by A.V. Bernshtein, a member of the Editorial Board.
