
RADIOENGINEERING, VOL. 21, NO. 4, DECEMBER 2012 1171

Solution of Linear Programming Problems using a Neural Network with Non-Linear Feedback

Syed Atiqur RAHMAN, Mohd. Samar ANSARI, Athar Ali MOINUDDIN

Department of Electronics Engineering, A.M.U., Aligarh, India

[email protected], [email protected], [email protected]

Abstract. This paper presents a recurrent neural circuit for solving linear programming problems. The objective is to minimize a linear cost function subject to linear constraints. The proposed circuit employs non-linear feedback, in the form of unipolar comparators, to introduce transcendental terms in the energy function ensuring fast convergence to the solution. The proof of validity of the energy function is also provided. The hardware complexity of the proposed circuit compares favorably with other proposed circuits for the same task. PSPICE simulation results are presented for a chosen optimization problem and are found to agree with the algebraic solution. Hardware test results for a 2-variable problem further serve to strengthen the proposed theory.

Keywords
Linear programming, dynamical systems, neural networks, feedback systems, non-linear feedback.

1. Introduction
Mathematical programming, in general, is concerned with the determination of a minimum or a maximum of a function of several variables, which are required to satisfy a number of constraints. Such solutions are sought in diverse fields including engineering, operations research, management science, computer science, numerical analysis, and economics [1], [2].

A general mathematical programming problem can be stated as [2]:

Minimize f(x)   (1)

subject to

g_i(x) ≥ 0  (i = 1, 2, …, m),   (2)
h_j(x) = 0  (j = 1, 2, …, p),   (3)
x ∈ S   (4)

where x = (x_1, x_2, …, x_n)^T is the vector of unknown decision variables, and f, g_i (i = 1, 2, …, m), h_j (j = 1, 2, …, p) are the real-valued functions of the n real variables x_1, x_2, …, x_n.

In this formulation, the function f is called the objective function, and the inequalities (2), equations (3) and the set restrictions (4) are referred to as the constraints. It may be mentioned that although the mathematical programming problem (MPP) has been stated as a minimization problem in the description above, it may readily be converted into a maximization problem without any loss of generality, by using the identity given in (5)

max f(x) = −min [−f(x)].   (5)
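Identity (5) is easy to confirm numerically; a minimal sketch in Python (the quadratic f below is an arbitrary illustrative choice, not taken from the paper):

```python
import numpy as np

# Identity (5): max f(x) = -min[-f(x)], checked on a grid for an
# arbitrary illustrative objective f.
f = lambda x: 3.0 - (x - 1.0) ** 2
xs = np.linspace(-5.0, 5.0, 1001)

assert np.isclose(f(xs).max(), -(-f(xs)).min())
print(f(xs).max())   # maximum value 3.0, attained at x = 1
```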

As a special case, if all the functions appearing in the MPP are linear in the decision variables x, the problem is referred to as a linear programming problem (LPP). Such LPPs have been investigated extensively over the past decades, in view of their fundamental roles in a wide variety of engineering and scientific applications, such as pattern recognition [3], signal processing [4], human movement analysis [5], robotic control [6], and data regression [7]. Other real-life applications include portfolio optimization [8], crew scheduling [9], manufacturing and transportation [10], telecommunications [11], and the Traveling Salesman Problem (TSP) [12].

Traditional methods for solving linear programming problems typically involve an iterative process, but long computational times limit their usage. An alternative approach to the solution of this problem is to exploit artificial neural networks (ANNs), which can be considered as analog computers relying on a highly simplified model of neurons [13]. ANNs have been applied to several classes of constrained optimization problems and have shown promise for solving such problems more effectively. For example, the Hopfield neural network has proven to be a powerful tool for solving some optimization problems. Tank and Hopfield first proposed a neural network for solving mathematical programming problems, where a linear programming problem (LPP) was mapped into a closed-loop network [14]. A brief overview of the various neural network based approaches which have been proposed in the past is presented in the next section.

In this paper, a hardware solution to the linear programming problem is presented. The proposed architecture uses non-linear feedback which leads to a new energy function that involves transcendental terms. This transcendental energy function is fundamentally different from the standard quadratic form associated with the Hopfield network and its variants. To solve an LPP in n variables with m constraints, the circuit requires n opamps, m unipolar comparators and (m+n) resistors, so the hardware complexity of the proposed network compares favorably with the existing hardware implementations. It may be mentioned that a similar approach of using non-linear synaptic interconnections between neurons has also been employed to solve systems of simultaneous linear equations [15] and quadratic programming problems [16]. An initial result on the solution of a linear programming problem in two variables using this approach appears in [17].

The remainder of this paper is arranged as follows. A brief review of relevant technical literature on the solution of LPP using neural network based methods is presented in Section 2. Section 3 outlines the mathematical formulation of the basic problem and details of the proposed network. Section 4 contains an explanation of the energy function and the proof of its validity. Section 5 contains the circuit implementation of the proposed network for a sample problem in four variables. PSPICE simulation results of the proposed circuit are also presented. Results of a breadboard implementation of the proposed circuit for a 2-variable problem are contained in Section 6. Issues that are expected to arise in actual monolithic implementations are discussed in Section 7. Concluding remarks are presented in Section 8.

2. Existing Neural Networks for LPP
LPP has received considerable research attention from the neural networks community. The first solution of the linear programming problem was proposed by Tank and Hopfield, wherein they used the continuous-time Hopfield network [14]. From the computational aspect, the operation of a Hopfield network for an optimization problem, like the LPP, manages a dynamic system characterized by an energy function, which is the combination of the objective function and the constraints of the original problem [18]. Over the years, the penalty function approach has become a popular technique for solving optimization problems. Kennedy & Chua proposed an improved version of Tank & Hopfield's network for LPP in which an inexact penalty function was considered [19]. The requirement of setting a large number of parameters was a major drawback of Kennedy & Chua's LPP network [4]. Rodriguez-Vazquez et al. used a different penalty method to transform the given LPP into an unconstrained optimization problem [20]. Although Rodriguez-Vazquez et al. later pointed out that their network had no equilibrium point in the classical sense [21], investigations by Lan et al. proved that the network can indeed converge to an optimal solution of the given problem from any arbitrary initial condition [22]. Maa & Shanblatt employed a two-phase neural network architecture for solving LPPs [23]. Chong et al. analyzed a class of neural network models for the solution of LPPs by dynamic gradient approaches based on exact non-differentiable penalty functions [24]. They also developed an analytical tool aimed at helping the system converge to a solution within a finite time. In an approach different from the penalty function methods, Zhu, Zhang and Constantinides proposed a Lagrange method for solving LPPs through Hopfield networks [25]. Xia and Wang used bounded variables to construct a new neural network approach to solve LPP with no penalty parameters. They suggested that the equilibrium point is the same as the exact solution when the primal and dual problems are solved simultaneously [26]. More recently, Malek & Yari proposed two new methods for solving the LPP and presented optimal solutions with efficient convergence within a finite time [27]. Lastly, Ghasabi-Oskoei, Malek and Ahmadi have presented a recurrent neural network model for solving LPP based on a dynamical system using arbitrary initial conditions. Their method does not require analog multipliers, thereby reducing the system complexity [28].

3. Proposed Circuit
Let the first-order function to be minimized be

F = c_1 V_1 + c_2 V_2 + … + c_n V_n   (6)

subject to the following linear constraints

a_11 V_1 + a_12 V_2 + … + a_1n V_n ≤ b_1
a_21 V_1 + a_22 V_2 + … + a_2n V_n ≤ b_2
⋮
a_m1 V_1 + a_m2 V_2 + … + a_mn V_n ≤ b_m   (7)

where V_1, V_2, …, V_n are the variables, and a_ij, c_j and b_i (i = 1, 2, …, m; j = 1, 2, …, n) are constants. The proposed neural-network based circuit to minimize the function given in (6) in accordance with the constraints of (7) is presented in Fig. 1. As can be seen from Fig. 1, the individual inequalities from the set of constraints are passed through non-linear synapses which are realized using unipolar comparators. The outputs of the comparators are fed to neurons having weighted inputs. The neurons are realized using opamps and the weights are implemented using resistances. R_pi and C_pi are the input resistance and capacitance of the opamp that is used to emulate the functionality of a neuron. These parasitic components are included to model the dynamic nature of the opamp.

If the unipolar comparator is operated from a single +Vm supply, while the opamp realizing the neuronal functionality is biased with ±Vm, the obtained transfer characteristics of the unipolar voltage comparator are as presented in Fig. 2 and can be mathematically modelled by (8). As explained in the next section, such unipolar comparator characteristics are utilized to obtain an energy function which acts to bring the neuronal states to the feasible region.

RADIOENGINEERING, VOL. 21, NO. 4, DECEMBER 2012 1173

Fig. 1. i-th neuron of the proposed feedback neural network circuit to solve a linear programming problem in n variables with m linear constraints.

Fig. 2. Transfer characteristics of the unipolar comparator.

x = (Vm/2) [tanh β (V_i − V_j) + 1].   (8)

Using (8), the output of the i-th unipolar comparator in Fig. 1 can be given by (9), where β is the open-loop gain of the comparator (practically very high), ±Vm are the output voltage levels of the comparator, and V_1, V_2, …, V_n are the neuron outputs.

x_i = (Vm/2) [tanh β (a_i1 V_1 + a_i2 V_2 + … + a_in V_n − b_i) + 1].   (9)
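The comparator model of (8)-(9) can be sketched in software as follows; the values of Vm and β here are illustrative placeholders, not the measured parameters reported later:

```python
import numpy as np

Vm, beta = 10.0, 1e3   # output swing and gain; illustrative values only

def comparator(a_row, V, b_i):
    """Unipolar comparator output x_i of eq. (9) for the constraint a_row.V <= b_i."""
    return 0.5 * Vm * (np.tanh(beta * (np.dot(a_row, V) - b_i)) + 1.0)

a_row, b_i = np.array([1.0, -1.0, 1.0]), 4.0
print(comparator(a_row, np.array([0.0, 10.0, 0.0]), b_i))   # satisfied -> ~0 V
print(comparator(a_row, np.array([10.0, 0.0, 0.0]), b_i))   # violated  -> ~Vm
```

With high gain the output is essentially 0 inside the feasible half-space and Vm outside it, which is exactly the behaviour the energy function exploits.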

Applying node equations for node ‘A’ in Fig. 1, the equationof motion of the i-th neuron can be given as

C_pi du_i/dt = Σ_{j=1}^{m} [x_j / R_cji] + c_i/R_i − u_i/R_ieqv   (10)

where R_ieqv is the parallel equivalent of all resistances connected at node ‘A’ in Fig. 1 and is given by (11)

1/R_ieqv = Σ_{j=1}^{m} [1/R_cji] + 1/R_i + 1/R_pi   (11)

where u_i is the internal state of the i-th neuron, and R_c1i, R_c2i, …, R_cmi are the weight resistances connecting the outputs of the unipolar comparators to the input of the i-th neuron. As shown later in this section, the values of these resistances are governed by the entries in the coefficient matrix of (7). Resistance R_i causes terms corresponding to the linear function to be minimized to appear in the energy function.
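Equation (11) is the usual parallel-combination formula; a minimal sketch with illustrative component values (an infinite resistance models an absent synaptic connection, i.e. a_ji = 0):

```python
# Parallel combination at node 'A', eq. (11):
#   1/R_ieqv = sum_j 1/R_cji + 1/R_i + 1/R_pi
# Component values below are illustrative, not from the paper.
def r_eqv(resistances):
    return 1.0 / sum(1.0 / r for r in resistances if r != float("inf"))

R_c = [1e3, 2e3, float("inf")]   # synapse resistances R_c1i, R_c2i, R_c3i
R_i, R_pi = 1e3, 1e6             # weight resistance and opamp input resistance
print(round(r_eqv(R_c + [R_i, R_pi]), 1))   # roughly 400 ohms
```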

It may be mentioned that the concept of a Lyapunov or energy function being associated with gradient-type neural networks was first employed by Hopfield in the stability analysis of the so-called Hopfield Neural Network (HNN) [14]. The computational energy for dynamical systems like the HNN and the Non-Linear Synapse Neural Network presented in this paper decreases continuously in time, with the network converging to a minimum in state space. The evolution of the system is in the general direction of the negative gradient of the energy function. Typically, the network energy function is made equivalent to a certain objective function that needs to be minimized. The search for an energy minimum performed by the network corresponds to the search for the solution of an optimization problem.

As explained in Section 4, the energy function associated with the non-linear feedback neural circuit of Fig. 1, for minimizing a linear objective function subject to linear constraints, is given by

E = Σ_{i=1}^{n} c_i V_i + (Vm/2) Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij V_j + (Vm/2β) Σ_{i=1}^{m} ln cosh β [Σ_{j=1}^{n} a_ij V_j − b_i].   (12)

This expression of the energy function can be written in a slightly different (but more illuminating) form as

E = Σ_{i=1}^{n} c_i V_i + (P_1 + P_2 + … + P_m)   (13)

where the first term is the same as the first-order function to be minimized, as given in (6), and P_1, P_2, …, P_m are the penalty terms. The i-th penalty term can be given as

P_i = (Vm/2) Σ_{j=1}^{n} a_ij V_j + (Vm/2β) ln cosh β [Σ_{j=1}^{n} a_ij V_j − b_i].   (14)

Taking the partial derivative of the combined penalty term, P (= P_1 + P_2 + … + P_m), with respect to V_i we have

∂P/∂V_i = (Vm/2) Σ_{j=1}^{m} a_ji + (Vm/2) Σ_{j=1}^{m} a_ji tanh β [Σ_{k=1}^{n} a_jk V_k − b_j]   (15)

which may be simplified to

∂P/∂V_i = Σ_{j=1}^{m} a_ji x_j.   (16)

Using the above relations to find the derivative of the energy function E with respect to V_i we have


∂E/∂V_i = ∂/∂V_i [Σ_{k=1}^{n} c_k V_k] + ∂P/∂V_i   (17)

which in turn yields

∂E/∂V_i = c_i + Σ_{j=1}^{m} a_ji x_j.   (18)
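The derivation from (12) to (18) can be spot-checked numerically: evaluate E with a numerically stable ln cosh and compare a central finite-difference gradient against c_i + Σ_j a_ji x_j. The problem data and the value of β below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3                                   # illustrative problem size
A = rng.standard_normal((m, n))               # a_ij
b = rng.standard_normal(m)                    # b_i
c = rng.standard_normal(n)                    # c_i
Vm, beta = 10.0, 50.0                         # assumed comparator swing and gain

def logcosh(z):
    # ln cosh(z) computed without overflow for large |z|
    return np.abs(z) + np.log1p(np.exp(-2.0 * np.abs(z))) - np.log(2.0)

def E(V):                                     # energy function, eq. (12)
    s = A @ V - b
    return c @ V + 0.5 * Vm * np.sum(A @ V) + 0.5 * Vm / beta * np.sum(logcosh(beta * s))

def grad(V):                                  # eq. (18): c_i + sum_j a_ji x_j
    x = 0.5 * Vm * (np.tanh(beta * (A @ V - b)) + 1.0)   # eq. (9)
    return c + A.T @ x

V = rng.standard_normal(n)
eps = 1e-6
num = np.array([(E(V + eps * e) - E(V - eps * e)) / (2 * eps) for e in np.eye(n)])
assert np.allclose(num, grad(V), atol=1e-4)   # (18) matches the finite difference
```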

Also, if E is the energy function, it must satisfy the following condition [29]:

∂E/∂V_i = K C_pi du_i/dt   (19)

where K is a constant of proportionality and has the dimensions of resistance. Using (9), (10) and (18) in (19) results in (for the i-th neuron)

R_cji = K/a_ji ;  (j = 1, 2, …, m).   (20)

A similar comparison of the remaining partial derivatives for the remaining neurons yields the following:

R_cji = K/a_ji ;  (j = 1, 2, …, m), (i = 1, 2, …, n),   (21)

R_i = K ;  (i = 1, 2, …, n).   (22)

4. Energy Function
This section deals with the explanation of the individual terms in the energy function expression given in (12). The last term is transcendental in nature, and an indicative plot showing the combined effect of the last two terms is presented in Fig. 3. As can be seen, one ‘side’ of the energy landscape is flat whilst the other has a slope directed to bring the system state towards the flat side. During the actual operation of the proposed LPP-solving circuit, the comparators remain effective only while the neuronal output states are outside the feasible region, and during this condition these unipolar comparators work to bring (and restrict) the neuron output voltages to the feasible region. Once that is achieved, the first term in (12) takes over and works to minimize the given function.

Fig. 3. Combined effect of last two terms in (12).

The validity of the energy function of (12) can be proved as follows. The time derivative of the energy function is given by

dE/dt = Σ_{i=1}^{n} (∂E/∂V_i)(dV_i/dt) = Σ_{i=1}^{n} (∂E/∂V_i)(dV_i/du_i)(du_i/dt).   (23)

Using (19) in (23) we get

dE/dt = Σ_{i=1}^{n} K C_pi (du_i/dt)² (dV_i/du_i).   (24)

The transfer characteristic of the output opamp used to implement the neurons in Fig. 1 realizes the activation function of the neuron and can be written as

V_i = f(u_i)   (25)

where V_i denotes the output of the opamp and u_i corresponds to the internal state at the inverting terminal. The function f is typically a saturating, monotonically decreasing one, as shown in Fig. 4, and therefore [15],

Fig. 4. Transfer characteristics of the opamp used to realize the neurons.

dV_i/du_i ≤ 0   (26)

thereby resulting in

dE/dt ≤ 0   (27)

with the equality being valid for

du_i/dt = 0.   (28)

Equation (27) shows that the energy function can never increase with time, which is one of the conditions for a valid energy function. The second criterion, i.e. that the energy function must have a lower bound, is also satisfied for the circuit of Fig. 1, wherein it may be seen that V_1, V_2, …, V_n are all bounded (as they are the outputs of opamps, as given in (25)), amounting to E, as given in (12), having a finite lower bound.

5. Simulation Results
This section deals with the application of the proposed network to the task of minimizing the objective function

V_1 + 2V_2 − V_3 + 3V_4   (29)

subject to

V_1 − V_2 + V_3 ≤ 4,
V_1 + V_2 + 2V_4 ≤ 6,
V_2 − 2V_3 + V_4 ≤ 2,
−V_1 + 2V_2 + V_3 ≤ 2,
V_1 ≥ 0,  V_4 ≥ 0.   (30)


Fig. 5. Obtaining unipolar comparator characteristics using an opamp and a diode.

Fig. 6. Transfer characteristics for opamp based unipolar and bipolar comparators as obtained from PSPICE simulations.

The values of the resistances acting as the weights on the neurons are obtained from (21), (22). For the purpose of simulation, the value of K was chosen to be 1 kΩ. Using K = 1 kΩ in (21), (22) gives

[R_c11 R_c12 R_c13 R_c14]   [  1K   −1K     1K     ∞  ]
[R_c21 R_c22 R_c23 R_c24]   [  1K    1K     ∞    0.5K ]
[R_c31 R_c32 R_c33 R_c34] = [  ∞     1K   −0.5K    1K ]
[R_c41 R_c42 R_c43 R_c44]   [ −1K   0.5K    1K     ∞  ]
[R_c51 R_c52 R_c53 R_c54]   [ −1K    ∞      ∞      ∞  ]
[R_c61 R_c62 R_c63 R_c64]   [  ∞     ∞      ∞    −1K  ]   (31)

R_1 = R_2 = R_3 = R_4 = 1K.   (32)
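The mapping from the coefficient matrix of (30) to the resistance matrix of (31) via R_cji = K/a_ji can be sketched as follows (a zero coefficient maps to an infinite resistance, i.e. no connection):

```python
import numpy as np

K = 1e3                                    # K = 1 kOhm, as chosen above
A = np.array([[ 1., -1.,  1.,  0.],        # coefficient matrix of (30); the bounds
              [ 1.,  1.,  0.,  2.],        # V1 >= 0 and V4 >= 0 are rewritten as
              [ 0.,  1., -2.,  1.],        # -V1 <= 0 and -V4 <= 0
              [-1.,  2.,  1.,  0.],
              [-1.,  0.,  0.,  0.],
              [ 0.,  0.,  0., -1.]])

with np.errstate(divide="ignore"):
    R = K / A                              # eq. (21): R_cji = K / a_ji (0 -> inf)

print(R / K)                               # entries in units of K, as in (31)
```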

For the purpose of PSPICE simulations, the unipolar voltage comparator was realized using a diode with an opamp based comparator, as shown in Fig. 5. The transfer characteristics obtained during the PSPICE simulations for opamp based bipolar and unipolar comparators are presented in Fig. 6, from where it can be observed that the obtained unipolar characteristics are in agreement with the ideal characteristics of Fig. 2. For the purpose of this simulation, the LMC7101A CMOS opamp model from the Orcad library in PSPICE was utilised. The value of β for this opamp was measured to be 1.1 × 10⁴ using PSPICE simulation. For the negative values arising in (31), inverted outputs would be required from the unipolar voltage comparators. For the present simulation, inverting amplifiers were employed at the outputs of the comparators which need negative weights.

Routine mathematical analysis of (29) subject to (30) yields: V_1 = 0, V_2 = −10, V_3 = −6 and V_4 = 0. The resultant plots of the neuron output voltages as obtained after PSPICE simulation are presented in Fig. 7, from where it can be seen that V(1) = 110 µV, V(2) = −10.28 V, V(3) = −6.17 V and V(4) = 110 µV, which are very near to the algebraic solution, thereby confirming the validity of the approach. For emulating a more realistic ‘power-up’ scenario, random initial values in the millivolt range were assigned to the node voltages.

Fig. 7. Simulation results for the proposed circuit applied to minimize (29) subject to (30).

One set of initial node voltages was: V(1) = 2 mV, V(2) = 7 mV, V(3) = −5 mV and V(4) = 10 mV.
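The reported convergence can be mimicked in software by integrating the gradient system associated with the energy function (12) using forward Euler. This is a behavioural sketch, not a SPICE model: β is deliberately reduced from the measured 1.1 × 10⁴ to keep the discretized system well conditioned, and the step size is an assumed value chosen for stability:

```python
import numpy as np

c = np.array([1.0, 2.0, -1.0, 3.0])                  # objective (29)
A = np.array([[ 1., -1.,  1.,  0.],                  # constraints (30) as A V <= b,
              [ 1.,  1.,  0.,  2.],                  # with V1 >= 0 and V4 >= 0
              [ 0.,  1., -2.,  1.],                  # rewritten as -V1 <= 0, -V4 <= 0
              [-1.,  2.,  1.,  0.],
              [-1.,  0.,  0.,  0.],
              [ 0.,  0.,  0., -1.]])
b = np.array([4.0, 6.0, 2.0, 2.0, 0.0, 0.0])
Vm, beta = 10.0, 50.0                                # assumed swing; reduced gain

def logcosh(z):                                      # stable ln cosh for large |z|
    return np.abs(z) + np.log1p(np.exp(-2.0 * np.abs(z))) - np.log(2.0)

def energy(V):                                       # eq. (12)
    s = A @ V - b
    return c @ V + 0.5 * Vm * np.sum(A @ V) + 0.5 * Vm / beta * np.sum(logcosh(beta * s))

V, h = np.zeros(4), 2e-4                             # 'power-up' state, Euler step
E_now = E_prev = energy(V)
for _ in range(150_000):
    x = 0.5 * Vm * (np.tanh(beta * (A @ V - b)) + 1.0)   # comparator outputs, eq. (9)
    V = V - h * (c + A.T @ x)                            # descend -dE/dV, eq. (18)
    E_now = energy(V)
    assert E_now <= E_prev + 1e-9                        # eq. (27): dE/dt <= 0
    E_prev = E_now

print(np.round(V, 2))   # settles close to the algebraic solution (0, -10, -6, 0)
```

Along the trajectory the energy is checked to be non-increasing, mirroring (27), and the final state lands near the algebraic solution up to an O(1/β) constraint-violation error.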

6. Hardware Test Results
Breadboard implementation of the proposed circuit was also carried out. Apart from verification of the working of the proposed circuit, the actual circuit realization also served the purpose of testing the convergence of the circuit to the solution starting from different initial conditions. The noise present in any electronic circuit acts as a random initial condition for the convergence of the neural circuit. Standard laboratory components, i.e. the µA741 opamp and resistances, were used for the purpose. A 2-variable problem for the minimization of the objective function

2V_1 + 6V_2   (33)

subject to

V_1 ≥ 1,  V_2 ≥ 1   (34)

was chosen for the hardware tests. The values of the resistances acting as the weights on the neurons are obtained from (21), (22). The value of K was chosen to be 1 kΩ. Using K = 1 kΩ in (21), (22) gives R_1 = R_2 = 1 kΩ, R_c11 = R_c22 = 1 kΩ and R_c12 = R_c21 = ∞. The voltages applied were b_1 = b_2 = 1 V, c_1 = 2 V and c_2 = 6 V. The obtained values of the neuronal voltages were V_1 = 1.06 V and V_2 = 1.02 V, which are in close agreement with the exact mathematical solution V_1 = 1 and V_2 = 1. A snapshot of the obtained results is presented in Fig. 8.

7. Issues in Actual Implementation
This section deals with the monolithic implementation issues of the proposed circuit. The PSPICE simulations assumed that all operational amplifiers (and diodes) are identical, and it is therefore required to determine how deviations from this assumption affect the performance of the network.


Fig. 8. Hardware test result for the proposed circuit applied to minimize (33) subject to (34).

A 10 % tolerance with a Gaussian deviation profile was placed on the resistances used in the circuit to solve (29). The analysis was carried out for 200 runs; the Mean Deviation was found to be −21.23 × 10⁻⁶ and the Mean Sigma (Standard Deviation) was 0.013. Offset analysis was also carried out by incorporating random offset voltages (in the range of 1 mV to 10 mV) into the opamps. The Mean Deviation in this case was measured to be −20.19 × 10⁻⁶ and the Mean Sigma (Standard Deviation) was 0.007. As can be seen, the effects of mismatches and offsets on the overall precision of the final results are within an acceptable range.

Effects of non-idealities in the various components were further investigated in PSPICE by testing the circuit with all resistances having the same percentage deviation from their assigned values. The resulting assessment of the quality of the solution is presented in Tab. 1, from where it is evident that the solution point does not change much even for high deviations in the resistance values.

Next, the effect of offset voltages in the opamp-based unipolar comparators was explored. Offset voltages were applied at the inverting terminals of the comparators, and the results of PSPICE simulations for the chosen LPP are given in Tab. 2. As can be seen, the offset voltages of the comparators do not affect the obtained solutions to any appreciable extent. However, the error does tend to increase with increasing offset voltages.

Finally, offset voltages for the opamps emulating the neurons were also considered. Offset voltages were applied at the non-inverting inputs of the opamps, and the results of PSPICE simulations were compared with the algebraic solution as given in Tab. 3. As can be seen, the offset voltages of the opamps have little effect on the obtained solutions.
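The spirit of this tolerance analysis can be reproduced at a purely behavioural level: a 10 % Gaussian spread on the synapse resistances perturbs the conductance ratios a_ji = K/R_cji, and to first order the solution vertex moves with the perturbed active-constraint system. The sketch below is a software analogue under that assumption, not a reproduction of the circuit-level Monte Carlo, so its deviation statistics will differ from the reported ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Constraints of (30) active at the solution (0, -10, -6, 0):
# V1 - V2 + V3 = 4,  V2 - 2V3 + V4 = 2,  -V1 = 0,  -V4 = 0.
A_act = np.array([[ 1., -1.,  1.,  0.],
                  [ 0.,  1., -2.,  1.],
                  [-1.,  0.,  0.,  0.],
                  [ 0.,  0.,  0., -1.]])
b_act = np.array([4.0, 2.0, 0.0, 0.0])
V_nom = np.linalg.solve(A_act, b_act)                 # the vertex (0, -10, -6, 0)

devs = []
for _ in range(200):                                  # 200 runs, as in the text
    tol = 1.0 + 0.1 * rng.standard_normal(A_act.shape)   # 10% Gaussian tolerance
    devs.append(np.linalg.solve(A_act * tol, b_act) - V_nom)
devs = np.asarray(devs)

print("mean deviation:", devs.mean(), "sigma:", devs.std())
```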

Tab. 1. Effect of variation in resistances on the obtained results.

Tab. 2. Effect of offset voltages of the opamp-based unipolar comparators on the solution quality.

Tab. 3. Effect of offset voltages of the opamp-based neurons on the solution quality.

In fact, the realization of unipolar comparators by the use of opamps and diodes in the proposed circuit tends to increase the circuit complexity. The transistor count can be further reduced by utilising voltage-mode unipolar comparators instead of the opamp-diode combination. This also suggests that a real, large-scale implementation for solving linear programming problems with high variable counts might be quite different. Alternative realizations based on the differential equations (10) governing the system of neurons are being investigated. Other approaches to obtaining the tanh(·) non-linearity include the use of a MOSFET operated in the sub-threshold region [30] and the use of the Current Differencing Transconductance Amplifier (CDTA) to provide the same nonlinearity in the current-mode regime [31].

8. Conclusion
In this paper, a CMOS-compatible approach to solve a linear programming problem in n variables subject to m linear constraints, using n neurons and m synapses, is presented. Each neuron requires one opamp and each synapse is implemented using a unipolar voltage-mode comparator. This results in a significant reduction in hardware over the existing schemes. The proposed network was tested on a sample problem of minimizing a linear function in 4 variables, and the simulation results confirm the validity of the approach. Hardware verification for a 2-variable problem further validated the proposed theory.

References

[1] KREYSZIG, E. Advanced Engineering Mathematics. 8th ed. New Delhi (India): Wiley-India, 2006.

[2] KAMBO, N. S. Mathematical Programming Techniques. New Delhi (India): Affiliated East-West Press Pvt Ltd., 1991.


[3] ANGUITA, D., BONI, A., RIDELLA, S. A digital architecture for support vector machines: Theory, algorithm, and FPGA implementation. IEEE Transactions on Neural Networks, 2003, vol. 14, no. 5, p. 993 - 1009.

[4] CICHOCKI, A., UNBEHAUEN, R. Neural Networks for Optimization and Signal Processing. Chichester (UK): Wiley, 1993.

[5] IQBAL, K., PAI, Y. C. Predicted region of stability for balance recovery: motion at the knee joint can improve termination of forward movement. Journal of Biomechanics, 2000, vol. 33, no. 12, p. 1619 - 1627.

[6] ZHANG, Y. Towards piecewise-linear primal neural networks for optimization and redundant robotics. In IEEE International Conference on Networking, Sensing and Control. Fort Lauderdale (FL, USA), 2006, p. 374 - 379.

[7] ZHANG, Y., LEITHEAD, W. E. Exploiting Hessian matrix and trust-region algorithm in hyperparameters estimation of Gaussian process. Applied Mathematics and Computation, 2005, vol. 171, no. 2, p. 1264 - 1281.

[8] YOUNG, M. R. A minimax portfolio selection rule with linear programming solution. Management Science, 1998, vol. 44, no. 5, p. 673 - 683.

[9] BIXBY, R. E., GREGORY, J. W., LUSTIG, I. J., MARSTEN, R. E., SHANNO, D. F. Very large-scale linear programming: A case study in combining interior point and simplex methods. Operations Research, 1992, vol. 40, no. 5, p. 885 - 897.

[10] PEIDRO, D., MULA, J., JIMENEZ, M., del MAR BOTELLA, M. A fuzzy linear programming based approach for tactical supply chain planning in an uncertainty environment. European Journal of Operational Research, 2010, vol. 205, no. 16, p. 65 - 80.

[11] CHERTKOV, M., STEPANOV, M. G. An efficient pseudocodeword search algorithm for linear programming decoding of LDPC codes. IEEE Transactions on Information Theory, 2008, vol. 54, no. 4, p. 1514 - 1520.

[12] CHVATAL, V., COOK, W., DANTZIG, G. B., FULKERSON, D. R., JOHNSON, S. M. Solution of a Large-Scale Traveling-Salesman Problem. Berlin (Germany): Springer, 2010, p. 7 - 28.

[13] KROGH, A. What are artificial neural networks? Nature Biotechnology, 2008, vol. 26, no. 2, p. 195 - 197.

[14] TANK, D. W., HOPFIELD, J. J. Simple ‘neural’ optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit. IEEE Transactions on Circuits and Systems, 1986, vol. 33, no. 5, p. 533 - 541.

[15] RAHMAN, S. A., ANSARI, M. S. A neural circuit with transcendental energy function for solving system of linear equations. Analog Integrated Circuits and Signal Processing, 2011, vol. 66, no. 3, p. 433 - 440.

[16] ANSARI, M. S., RAHMAN, S. A. A non-linear feedback neural network for solution of quadratic programming problems. International Journal of Computer Applications, 2012, vol. 39, no. 2, p. 44 - 48.

[17] ANSARI, M. S., RAHMAN, S. A. A DVCC-based non-linear analog circuit for solving linear programming problems. In International Conference on Power, Control and Embedded Systems (ICPCES). Allahabad (India), 2010, p. 1 - 4.

[18] WEN, U.-P., LAN, K.-M., SHIH, H.-S. A review of Hopfield neural networks for solving mathematical programming problems. European Journal of Operational Research, 2009, vol. 198, no. 3, p. 675 - 687.

[19] KENNEDY, M. P., CHUA, L. O. Neural networks for nonlinear programming. IEEE Transactions on Circuits and Systems, 1988, vol. 35, no. 5, p. 554 - 562.

[20] RODRIGUEZ-VAZQUEZ, A., RUEDA, A., HUERTAS, J. L., DOMINGUEZ-CASTRO, R. Switched-capacitor neural networks for linear programming. Electronics Letters, 1988, vol. 24, no. 8, p. 496 - 498.

[21] RODRIGUEZ-VAZQUEZ, A., DOMINGUEZ-CASTRO, R., RUEDA, A., HUERTAS, J. L., SANCHEZ-SINENCIO, E. Nonlinear switched-capacitor ‘neural’ networks for optimization problems. IEEE Transactions on Circuits and Systems, 1990, vol. 37, no. 3, p. 384 - 398.

[22] LAN, K.-M., WEN, U.-P., SHIH, H.-S., LEE, E. S. A hybrid neural network approach to bilevel programming problems. Applied Mathematics Letters, 2007, vol. 20, no. 8, p. 880 - 884.

[23] MAA, C.-Y., SHANBLATT, M. A. Linear and quadratic programming neural network analysis. IEEE Transactions on Neural Networks, 1992, vol. 3, no. 4, p. 580 - 594.

[24] CHONG, E. K. P., HUI, S., ZAK, S. H. An analysis of a class of neural networks for solving linear programming problems. IEEE Transactions on Automatic Control, 1999, vol. 44, no. 11, p. 1995 - 2006.

[25] ZHU, X., ZHANG, S., CONSTANTINIDES, A. G. Lagrange neural networks for linear programming. Journal of Parallel and Distributed Computing, 1992, vol. 14, no. 3, p. 354 - 360.

[26] XIA, Y., WANG, J. Neural network for solving linear programming problems with bounded variables. IEEE Transactions on Neural Networks, 1995, vol. 6, no. 2, p. 515 - 519.

[27] MALEK, A., YARI, A. Primal-dual solution for the linear programming problems using neural networks. Applied Mathematics and Computation, 2005, vol. 167, no. 1, p. 198 - 211.

[28] GHASABI-OSKOEI, H., MALEK, A., AHMADI, A. Novel artificial neural network with simulation aspects for solving linear and quadratic programming problems. Computers & Mathematics with Applications, 2007, vol. 53, no. 9, p. 1439 - 1454.

[29] RAHMAN, S. A., JAYADEVA, DUTTA ROY, S. C. Neural network approach to graph colouring. Electronics Letters, 1999, vol. 35, no. 14, p. 1173 - 1175.

[30] NEWCOMB, R. W., LOHN, J. D. Analog VLSI for neural networks. In The Handbook of Brain Theory and Neural Networks. Cambridge (MA, USA): MIT Press, 1998, p. 86 - 90.

[31] ANSARI, M. S., RAHMAN, S. A. A novel current-mode non-linearfeedback neural circuit for solving linear equations. In InternationalConference on Multimedia, Signal Processing and CommunicationTechnologies. Aligarh (India), 2009, p. 284 - 287.

About Authors ...

Syed Atiqur RAHMAN received the B.Sc.(Eng.) in Electrical Engineering in 1988, and the M.Sc.(Eng.) in Electronics and Communication in 1994, from Aligarh Muslim University (AMU), Aligarh. He was appointed as Lecturer in the Department of Electronics Engineering at AMU, Aligarh in 1988. He joined as Reader in Computer Engineering, and later in Electronics Engineering, at the same University in 1997. He earned his PhD from IIT Delhi, India, in 2007 and is currently working as an Associate Professor in the Department of Electronics Engineering at AMU. His fields of interest are electronic circuits, logic theory and artificial neural networks.

Mohd. Samar ANSARI received the B.Tech. degree in Electronics Engineering from Aligarh Muslim University, Aligarh, India, in 2001, and the M.Tech. degree in Electronics Engineering, with specialization in Electronic Circuit and System Design, in 2007. He is currently an Assistant Professor in the Department of Electronics Engineering, Aligarh Muslim University, where he teaches Electronic Devices & Circuits and Microelectronics. His research interests include microelectronics, analog signal processing and neural networks. He has published around 50 research papers in reputed international journals and conferences and has authored 3 book chapters.

Athar Ali MOINUDDIN received the B.Sc.(Eng.) in Electronics Engineering, the M.Sc.(Eng.) in Communication & Information Systems, and the Ph.D. from Aligarh Muslim University, Aligarh. He was appointed as Associate Professor in the Electronics Engineering Department of the same University in 2009. His fields of interest are communication systems and artificial neural networks.