A Globally Convergent Sequential Linear Programming Algorithm for Mathematical Programs with Linear Complementarity Constraints¹

Jean Bosco Etoa Etoa²

¹ This work was completed during the PhD thesis research of the first author (Ref. 9). ² Professeur Chargé de cours, Département des Sciences Économiques et de Gestion, Université de Yaoundé II, BP 15 Soa, Cameroun ([email protected], [email protected]). Current address: Cultural Counselor, Cameroon High Commission, 170 Clemow Avenue, Ottawa (ON) K1S 2B4, Canada.


Abstract: This paper presents a sequential linear programming algorithm for computing a stationary point of a mathematical program with linear equilibrium constraints. The algorithm is based on a formulation of the equilibrium constraints as a system of semismooth equations by means of a perturbed Fischer-Burmeister functional. Using only the data of the problem, we introduce a method to update the parameter that characterizes this perturbed functional. Some computational results are reported.

Keywords: mathematical program with linear equilibrium constraints, linear complementarity problem, perturbed Fischer-Burmeister functional, sequential linear programming algorithm.

1. Introduction

A mathematical program with equilibrium constraints (MPEC) is an optimization problem whose constraints include variational inequalities or complementarity systems parametrized by a design variable. An extensive bibliography is available in [6, 7, 12, 14, 21], while some interesting applications can be found in [13, 29, 33, 36]. Few algorithmic approaches have been proposed for computing stationary points of MPECs. They include the three approaches described by Luo, Pang and Ralph [29], namely a penalty interior point approach (PIPA), an implicit programming approach, and a piecewise sequential quadratic programming approach (PSQP). Fukushima, Luo and Pang [20] proposed a sequential quadratic programming algorithm that shares common features with the PIPA approach. Zhang and Liu [42] introduced an extreme point technique into PSQP algorithms to solve linear MPECs. Based on branch-and-bound techniques, Thoai, Yamamoto, and Yoshise [38] proposed a global optimization method to solve MPECs. A projected-gradient algorithm including a complementarity method has recently been proposed by Figueiredo et al. (Ref. 15) to solve an MPEC that can be reduced to a nonlinear programming problem. Some algorithms based on nonsmooth reformulations of MPECs or related problems have also been proposed (see [25, 26, 27, 28, 35]). The proposals in [11, 20] share related ideas, but differ in several respects from our present work. The bilevel programming problem (BPP) is a special case of MPEC in which some variables are restricted to lie in the solution set of a parametric convex optimization problem (see [30, 31]). Comprehensive overviews of the historical development of BPP can be found in [1, 3, 7, 41].

We are interested in a sequential linear programming (SLP) algorithm for solving the MPEC with linear complementarity constraints, which is known to be a very difficult problem, being nonsmooth and nonconvex even under very favorable assumptions. In the SLP algorithm, the parametric Fischer-Burmeister functional is used to reformulate the complementarity conditions. We introduce a new method to update the aforesaid parameter. We then solve a sequence of smooth, locally regular problems which progressively approximate the nonsmooth initial problem. We prove that the sequence of points so generated converges to a stationary point of the MPEC.

The rest of the paper is organized as follows. In Section 2, we formally define the problem treated in this paper in terms of the perturbed Fischer-Burmeister functional. In Section 3, we show that the parameter of this functional can be updated using the data of the problem exclusively. In Section 4, we present the SLP algorithm and establish a global convergence theorem; we also show that a class of MPEC problems can be transformed into bilevel programming problems, and that the SLP algorithm constitutes a good tool to solve linear complementarity problems. In Section 5, we report computational results on both MPECs and generalized linear complementarity problems, obtained with a MATLAB implementation.

2. Problem definition and preliminaries

Let's consider the following mathematical programming problem with parametric linear complementarity constraints:

\[
\begin{array}{ll}
\min_{x,y} & f_1(x, y) \\
\text{s.t.} & w = My + Nx + q, \\
& 0 \le w \perp y \ge 0, \\
& (x, y, w) \in X \times \mathbb{R}^{n_y} \times \mathbb{R}^{n_y},
\end{array}
\tag{2.1}
\]

where \(f_1 : \mathbb{R}^{n_x} \times \mathbb{R}^{n_y} \to \mathbb{R}\) is a continuously differentiable function, \(X \subseteq \mathbb{R}^{n_x}_+\) is a given polyhedral convex set, \(q \in \mathbb{R}^{n_y}\) is a given vector, \(M \in \mathbb{R}^{n_y \times n_y}\) and \(N \in \mathbb{R}^{n_y \times n_x}\) are given matrices, and the notation \(w \perp y\) means that the vectors \(w\) and \(y\) are perpendicular to each other. We assume throughout the paper that \(M\) is a \(P_0\) matrix, that is, all principal minors of \(M\) are nonnegative. We assume that the polyhedron \(X\) has the following representation as the solution set of a system of linear inequalities, i.e.,

\[
X = \{ x \in \mathbb{R}^{n_x}_+ : Ax \le b \},
\tag{2.2}
\]

where \(A\) is \(m \times n_x\) and \(b\) is in \(\mathbb{R}^m\). Let \(z \equiv (x, y, w) \in \mathbb{R}^{n_x + 2n_y}\), and let \(\mathcal{F} \subseteq \mathbb{R}^{n_x + 2n_y}\) and \(T(z, \mathcal{F})\) denote the feasible region of (2.1) and the tangent cone of \(\mathcal{F}\) at a vector \(z \in \mathcal{F}\), respectively. A feasible point \(z^* \equiv (x^*, y^*, w^*) \in \mathcal{F}\) is said to be a stationary point of (2.1) if

\[
(dx, dy, dw) \in T(z^*, \mathcal{F}) \;\Longrightarrow\; \nabla f_1(x^*, y^*)^t \begin{pmatrix} dx \\ dy \end{pmatrix} \ge 0.
\tag{2.3}
\]

Stationarity conditions involving constraint multipliers have been studied extensively for MPECs (see [18, 34]). Such conditions are in general complicated because of the disjunctive nature of the complementarity constraints. Consider the following notation:

\[
I_-(z^*) := \{ i : y_i^* = 0 < w_i^* \}; \quad
I_0(z^*) := \{ i : w_i^* = 0 = y_i^* \}; \quad
I_+(z^*) := \{ i : w_i^* = 0 < y_i^* \}.
\]


Let \(z^* = (x^*, y^*, w^*) \in \mathcal{F}\). It is well known that a vector \((dx, dy, dw)\) is an element of \(T(z^*, \mathcal{F})\) if and only if \(dx \in T(x^*, X)\), \(dw = N\,dx + M\,dy\), and

\[
dw_i = 0, \; i \in I_+(z^*); \qquad
(dw_i, dy_i) \ge 0, \; dw_i\, dy_i = 0, \; i \in I_0(z^*); \qquad
dy_i = 0, \; i \in I_-(z^*).
\]

In Chapter 3 of their monograph, Luo, Pang and Ralph [29] derived an equivalent primal-dual description of stationarity, based on various partitions of the degenerate index set \(I_0(z^*)\). This set induces decompositions of problem (2.1) into a finite family of smooth nonlinear programs. In the special case where the point \(z^* = (x^*, y^*, w^*)\) is nondegenerate, the stationarity conditions become simple. In general, a triplet \((x, y, w)\) is said to be nondegenerate if the index set \(I_0(z^*)\) is empty, i.e., \((y_i^*, w_i^*) \ne (0, 0)\), \(i = 1, 2, \ldots, n_y\).

Let \(z^* = (x^*, y^*, w^*) \in \mathcal{F}\) be nondegenerate. Corollary 5.1.3 in [29] states that \(z^*\) is a stationary point of (2.1) if and only if there exist multipliers \((\eta^*, \nu^*) \in \mathbb{R}^{2n_y}\), \(\lambda^* \in \mathbb{R}^m_+\) such that

\[
\begin{pmatrix} \nabla_x f_1(x^*, y^*) \\ \nabla_y f_1(x^*, y^*) \\ 0 \end{pmatrix}
+ \begin{pmatrix} N^t \\ M^t \\ -I_{n_y} \end{pmatrix} \eta^*
+ \begin{pmatrix} 0 \\ Y^* \\ W^* \end{pmatrix} \nu^*
+ \begin{pmatrix} A^t \\ 0 \\ 0 \end{pmatrix} \lambda^* = 0,
\qquad
(\lambda^*)^t (Ax^* - b) = 0,
\tag{2.4}
\]

where \(W^*\) and \(Y^*\) are diagonal matrices with diagonal entries \(w_i^*\) and \(y_i^*\), \(i = 1, \ldots, n_y\). The SLP algorithm proposed in this paper makes use of the functional \(\Phi : \mathbb{R}^2 \times\, ]0, +\infty[\, \to \mathbb{R}\) defined by:

\[
\Phi(a, b, \mu) = a + b - \sqrt{a^2 + b^2 + 2\mu} \quad \text{for } (a, b, \mu) \in \mathbb{R}^2 \times\, ]0, +\infty[.
\tag{2.5}
\]

This function is a perturbed version of the Fischer-Burmeister function (see [10, 16, 17]); it has been used successfully to solve complementarity problems (see [23]) and MPEC problems by continuation methods (see [11, 20]). By Lemma 2.2 in Kanzow [23], the function \(\Phi\) has the NCP property, that is,

\[
\Phi(a, b, \mu) = 0 \;\Longleftrightarrow\; 0 < a, \; 0 < b \;\text{ and }\; ab = \mu.
\tag{2.6}
\]

When \(\mu = 0\), \(\Phi\) reduces to the Fischer-Burmeister function (Ref. 8), which has been extensively used to solve complementarity and related problems (see [5, 8, 10, 16, 24, 40]).

The function \(\Phi(\cdot, \cdot, \mu) : \mathbb{R}^2 \to \mathbb{R}\) is everywhere differentiable for \(\mu > 0\); for \(\mu = 0\), it is differentiable everywhere except at \((a, b) = (0, 0)\). For any \((a, b, \mu) \in \mathbb{R}^2 \times \mathbb{R}_+\) such that \((a, b, \mu) \ne (0, 0, 0)\), we have:

\[
\nabla_{(a,b)} \Phi(a, b, \mu)
\equiv \left( \frac{\partial \Phi(a, b, \mu)}{\partial a}, \frac{\partial \Phi(a, b, \mu)}{\partial b} \right)
= \left( 1 - \frac{a}{\sqrt{a^2 + b^2 + 2\mu}}, \; 1 - \frac{b}{\sqrt{a^2 + b^2 + 2\mu}} \right).
\tag{2.7}
\]
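As a minimal numerical sketch of (2.5)-(2.7), the following code evaluates the perturbed Fischer-Burmeister functional and its partial derivatives, and illustrates the NCP property (2.6) on a hypothetical pair (a, b):

```python
import numpy as np

def phi(a, b, mu):
    """Perturbed Fischer-Burmeister functional (2.5)."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu)

def grad_phi(a, b, mu):
    """Partial derivatives of phi w.r.t. (a, b), as in (2.7)."""
    r = np.sqrt(a**2 + b**2 + 2.0 * mu)
    return 1.0 - a / r, 1.0 - b / r

# NCP property (2.6): phi(a, b, mu) = 0  <=>  a > 0, b > 0 and a*b = mu.
a, b = 0.5, 3.0
print(phi(a, b, a * b))   # 0.0: the pair is "mu-complementary" for mu = a*b
print(phi(a, b, 0.0))     # > 0 here, since a*b > 0 violates exact complementarity
```

Note that for μ > 0 the square-root argument is strictly positive, so the gradient (2.7) is defined at every point, including (a, b) = (0, 0).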

We consider the following perturbed problem associated with problem (2.1), \(MPEC(M, N, q, \mu)\):

\[
\begin{array}{ll}
\min_{x, y, w} & f_1(x, y) \\
\text{s.t.} & w = q + Nx + My, \\
& \Phi(w_i, y_i, \mu) = 0, \quad i = 1, 2, \ldots, n_y, \\
& (x, y, w) \in X \times \mathbb{R}^{2n_y}_+.
\end{array}
\tag{2.8}
\]

An additional assumption is formulated:

Assumption (H1). Any feasible solution of the problem \(MPEC(M, N, q, \mu)\) is nondegenerate, i.e., \(w_i + y_i \ne 0\) for all \(i = 1, 2, \ldots, n_y\).

The absence of Assumption (H1) may compromise a constraint qualification hypothesis for the problem \(MPEC(M, N, q, \mu)\). Consider the diagonal matrices below:

\[
D_w^k = \mathrm{diag}\!\left( \frac{\partial \Phi(w_1^k, y_1^k, \mu_k)}{\partial w_1}, \ldots, \frac{\partial \Phi(w_{n_y}^k, y_{n_y}^k, \mu_k)}{\partial w_{n_y}} \right),
\]
\[
D_y^k = \mathrm{diag}\!\left( \frac{\partial \Phi(w_1^k, y_1^k, \mu_k)}{\partial y_1}, \ldots, \frac{\partial \Phi(w_{n_y}^k, y_{n_y}^k, \mu_k)}{\partial y_{n_y}} \right).
\]

It is easy to see that the diagonal matrices \(D_y^k\) and \(D_w^k\) are regular under Assumption (H1).

To solve the perturbed problems (2.8), Facchinei, Jiang, and Qi [11] proposed to find a KKT point of the respective problems for a sequence of positive scalars \(\{\mu_k\}\) converging to zero. Thus their approach consists of solving a sequence of nonlinear programs, each of which corresponds to a particular value \(\mu_k\) of the given sequence. This may be time consuming, as is any numerical procedure that requires the repeated solution of nonlinear programs such as (2.8). The method proposed by Fukushima, Luo, and Pang [20] carries out one SQP iteration together with penalty techniques when solving each of the nonlinear programming subproblems. Instead, our approach solves each such subproblem using a sequential linear programming iteration. The SLP algorithm is a locally descent procedure with a linear rate of convergence (see [9, 19, 37, 39]) that produces iterates converging to stationary points of the problem solved.

It is easy to show that problems \(MPEC(M, N, q, \mu)\) and \(MPEC(M, N, q)\) are equivalent when \(\mu \to 0\). However, contrary to \(MPEC(M, N, q)\), problem \(MPEC(M, N, q, \mu)\) is a locally smooth classical optimization program. Let \(\mathcal{F}_\mu\) be the compact set of feasible solutions of \(MPEC(M, N, q, \mu)\). For any value \(\mu_k \ge 0\), any sequence of feasible solutions of \(MPEC(M, N, q, \mu)\) has an accumulation point, and the sequence of programs \(MPEC(M, N, q, \mu)\) can be solved by any constrained mathematical programming software.

For a given value \(\mu_k \ge 0\), let \(z^k \in \mathcal{F}_{\mu_k}\); the proposed algorithm for computing a stationary point of problem (2.1) is an iterative scheme that generates a sequence of iterates by solving the following linearized formulation of problem (2.8):

\(LP(z^k, \mu_k)\):

\[
\begin{array}{lll}
\max_{dx, dy, dw, \xi} & \xi & \\
\text{s.t.} & N\,dx + M\,dy - dw = 0, & (C1) \\
& D_y^k\, dy + D_w^k\, dw = -\Psi(w^k, y^k, \mu_k), & (C2) \\
& dy \ge -y^k, & (C3) \\
& dw \ge -w^k, & (C4) \\
& dx \ge -x^k, & (C5) \\
& A\,dx \le b - Ax^k, & (C6) \\
& \nabla_x f_1(x^k, y^k)^t dx + \nabla_y f_1(x^k, y^k)^t dy + \xi \le 0, & (C7) \\
& (dx, dy, dw) \in \mathbb{R}^{n_x} \times \mathbb{R}^{2n_y}, \quad \xi \ge 0, &
\end{array}
\tag{2.9}
\]

(2.9)

where the components of the vector ( , , )k k k

kw y µΨ = Ψ are given by ( , , ), 1,2,...,i i k yw y i nµΦ = . The solution of ( ( , )k

kLP z µ ) includes a descent direction ( , , )dz dx dy dw= relative to the objective function 1f , while imposing to this direction to be, at least closed locally to the boundary of the domain kµF of ( ( , , , )kMPEC M N q µ ). This is done through the constraint (C7) and the variable ξ . However, if the optimum value of the problem ( ( , )k

kLP z µ ) corresponds to * 0ξ > , then 1 1( , ) ( , ) 0k k t k k t

x yf x y dx f x y dy∇ +∇ < , and ( ),dx dy is a descent direction. The variation ( , )dx dy is a descent direction for 1f as long as * 0ξ > . The above formulation is closed to the one used in Minoux (1983) to solve a convex nonlinear programming problem. One can observe that solving the LP (2.9) is equivalent to solve the following LP program:


\[
\begin{array}{ll}
\min_{dx, dy, dw} & \nabla_x f_1(x^k, y^k)^t dx + \nabla_y f_1(x^k, y^k)^t dy \\
\text{s.t.} & N\,dx + M\,dy - dw = 0, \\
& D_y^k\, dy + D_w^k\, dw = -\Psi(w^k, y^k, \mu_k), \\
& dy \ge -y^k, \quad dw \ge -w^k, \quad dx \ge -x^k, \\
& A\,dx \le b - Ax^k, \\
& (dx, dy, dw) \in \mathbb{R}^{n_x} \times \mathbb{R}^{2n_y}.
\end{array}
\]
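As a sketch of how the direction-finding subproblem can be assembled numerically, the code below solves the "min" form of the LP on a tiny hypothetical instance (n_x = 1, n_y = 2, M = I, N = (1, 1)^t, q = (-1, -1)^t, f_1(x, y) = x + y_1 + y_2) with `scipy.optimize.linprog`. The upper caps of 1.0 on each step component are an implementation convenience to keep the LP bounded, not part of the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Current feasible point of the perturbed problem (hypothetical data):
xk = np.array([0.0]); yk = np.array([1.2, 1.2]); wk = np.array([0.2, 0.2])
mu = float(np.max(wk * yk))                  # data-driven mu_k = 0.24

r = np.sqrt(wk**2 + yk**2 + 2 * mu)          # sqrt(w^2 + y^2 + 2 mu) = 1.4
Dw, Dy = 1 - wk / r, 1 - yk / r              # diagonal entries of D_w^k, D_y^k
psi = wk + yk - r                            # Psi(w^k, y^k, mu_k) = 0 here

# Variable order (dx, dy1, dy2, dw1, dw2); grad f_1 = (1, 1, 1).
c = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
A_eq = np.array([
    [1.0, 1.0, 0.0, -1.0, 0.0],              # N dx + M dy - dw = 0
    [1.0, 0.0, 1.0, 0.0, -1.0],
    [0.0, Dy[0], 0.0, Dw[0], 0.0],           # D_y dy + D_w dw = -Psi
    [0.0, 0.0, Dy[1], 0.0, Dw[1]],
])
b_eq = np.concatenate([np.zeros(2), -psi])
A_ub = np.array([[1.0, 0.0, 0.0, 0.0, 0.0]]) # A dx <= b - A x^k with A = [1], b = [10]
b_ub = np.array([10.0])
bounds = [(-xk[0], 1.0), (-yk[0], 1.0), (-yk[1], 1.0), (-wk[0], 1.0), (-wk[1], 1.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)   # negative: (dx, dy) is a descent direction for f_1
```

On this instance the constraints force dy_i = -6 dw_i and dx = 7 dw_i, so the optimum is attained at the cap dx = 1 with objective value -5/7.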

In the following result, we show that the solution of the linear programming problem \(LP(z^k, \mu_k)\) is unique when it exists.

Proposition 2.1. For a given \(\mu_k > 0\), let \(z^k \in \mathcal{F}_{\mu_k}\), \(k = 1, 2, \ldots\), be a feasible solution of the subproblem \(MPEC(M, N, q, \mu_k)\), where \(M\) is a \(P_0\) matrix. Under Assumption (H1), the problem \(LP(z^k, \mu_k)\) has a unique solution when it exists.

Proof. Assume that \((\xi, dx, dy, dw)\) is a solution of the problem \(LP(z^k, \mu_k)\). As \(M\) is a \(P_0\) matrix and \(D_w^k\), \(D_y^k\) are regular matrices, according to Proposition 3.1 in [20] the matrix

\[
Q^k = \begin{bmatrix} M & -I_{n_y} \\ D_y^k & D_w^k \end{bmatrix}
\]

is regular. Adding slack variables \(s_y, s_w, s_{x_1}, s_{x_2}, s_\xi\) to the five last constraints in (2.9), we have the following formulation:

\[
A^k \begin{pmatrix} dy \\ dw \\ s_y \\ s_w \\ s_{x_1} \\ s_{x_2} \\ s_\xi \end{pmatrix} = b_k,
\qquad
A^k = \begin{bmatrix}
M & -I_{n_y} & 0 & 0 & 0 & 0 & 0 \\
D_y^k & D_w^k & 0 & 0 & 0 & 0 & 0 \\
-I_{n_y} & 0 & I_{n_y} & 0 & 0 & 0 & 0 \\
0 & -I_{n_y} & 0 & I_{n_y} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & I_{n_x} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & I_m & 0 \\
\nabla_y f_1(x^k, y^k)^t & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix},
\]

with right-hand side

\[
b_k = \begin{pmatrix}
-N\,dx \\
-\Psi(w^k, y^k, \mu_k) \\
y^k \\
w^k \\
x^k + dx \\
b - A(x^k + dx) \\
-\nabla_x f_1(x^k, y^k)^t dx - \xi
\end{pmatrix}.
\]

As \(Q^k\) is regular, the matrix \(A^k\) is also regular. Using the substitution

\[
(dy, dw, s_y, s_w, s_{x_1}, s_{x_2}, s_\xi)^t = (A^k)^{-1} b_k,
\]

we can reduce \(LP(z^k, \mu_k)\) to an equivalent linear program in the variables \(dx\) only, with the constraint \(x^k + dx \in X\). Since \(dx = 0\) trivially satisfies the three last constraints because \(x^k \in X\), it follows that \(LP(z^k, \mu_k)\) has a unique optimal solution. ■

3. An updating method for the perturbed Fischer-Burmeister functional

Expanding to first order the function \(f : \mathbb{R}^2 \to \mathbb{R}\) defined by \(f(w, y) = w \cdot y\), we introduce a method that updates, at any iteration, the parameter used in the perturbed Fischer-Burmeister function on the basis of the data of the problem solved alone. Let \(\{w^k y^k\}_{k \in K}\) be a sequence of scalar products such that, for all \(k \in K\), \(w^k \ge 0\) and \(y^k \ge 0\). The following proposition establishes a necessary and sufficient condition on the variations \(dw\) and \(dy\) for the sequence \(\{w^k y^k\}_{k \in K}\) to be decreasing.

Proposition 3.1. Let \(\{w^k y^k\}_{k \in K}\) be a sequence of scalar products such that, for all \(k \in K\), \(w^k \ge 0\) and \(y^k \ge 0\), and assume that there exists \(\tau > 0\) sufficiently small such that \(w^{k+1} = w^k + \tau\, dw\) and \(y^{k+1} = y^k + \tau\, dy\). A necessary and sufficient condition for the sequence \(\{w^k y^k\}_{k \in K}\) to decrease is that \(w^k\, dy + y^k\, dw \le 0\).

Proof. As \(\tau > 0\) is close to zero, it suffices to note that

\[
w^{k+1} y^{k+1} = w^k y^k + \tau\,(w^k\, dy + y^k\, dw) + o(\tau). \;\blacksquare
\]
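A quick numeric illustration of Proposition 3.1 on hypothetical scalars: the product decreases for small τ exactly when w·dy + y·dw is nonpositive.

```python
# w_{k+1} = w_k + tau*dw, y_{k+1} = y_k + tau*dy; the condition of
# Proposition 3.1 holds, so the product must decrease (values are illustrative).
w, y = 0.4, 0.9
dw, dy = -0.5, 0.1            # w*dy + y*dw = 0.04 - 0.45 = -0.41 <= 0
tau = 1e-3
assert w * dy + y * dw < 0
assert (w + tau * dw) * (y + tau * dy) < w * y
```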

Let \(J = \{1, 2, \ldots, n_y\}\) and \(i \in J\), and consider the perturbed Fischer-Burmeister function \(\Phi\). We now show that if the sequence \(\{w_i^k y_i^k\}_{k \in K}\) is decreasing and converges to zero, and if \(\mu_k = \max_{i \in J} w_i^k y_i^k\) for all \(k \in K\), then the sequence \(\{\Phi(w_i^k, y_i^k, \mu_k)\}_{k \in K}\) has nonpositive values, is increasing, and converges to zero. The following proposition expresses this result.

Proposition 3.2. Let \(i \in J\) and consider the sequence \(\{w_i^k y_i^k\}_{k \in K}\) with \(w_i^k \ge 0\) and \(y_i^k \ge 0\). Let \(\Phi_i^k = \Phi(w_i^k, y_i^k, \mu_k)\), and let \((dw_i, dy_i, d\mu)\) and \(\tau > 0\) close to zero be such that \((w_i^{k+1}, y_i^{k+1}, \mu_{k+1}) = (w_i^k + \tau\, dw_i,\; y_i^k + \tau\, dy_i,\; \mu_k + \tau\, d\mu)\). Assume that:

i) \(\mu_k = \max_{i \in J} w_i^k y_i^k\) for all \(k \in K\);

ii) the sequence \(\{w_i^k y_i^k\}_{k \in K}\) is decreasing and converges to zero.

Then:

a) for all \(i \in J\), the variations verify \(w_i^k\, dy_i + y_i^k\, dw_i - d\mu \ge 0\);

b) for all \(i \in J\), the sequence \(\{\Phi_i^k\}_{k \in K}\) is increasing and converges to zero.

Proof.

a) Let \(i \in J\) be an index, and let \(\{w_i^k y_i^k\}_{k \in K}\) be a sequence such that \((w_i^{k+1}, y_i^{k+1}, \mu_{k+1}) = (w_i^k + \tau\, dw_i,\; y_i^k + \tau\, dy_i,\; \mu_k + \tau\, d\mu)\), where \(\tau > 0\) is close to zero. One has

\[
w_i^{k+1} y_i^{k+1} = w_i^k y_i^k + \tau\,(w_i^k\, dy_i + y_i^k\, dw_i) + o(\tau).
\]

As \(w_i^{k+1} y_i^{k+1} \le w_i^k y_i^k\), one has \(w_i^k\, dy_i + y_i^k\, dw_i \le 0\). In the same way, since \(\mu_{k+1} \le \mu_k\), one has \(d\mu \le 0\). One can find \(i_0 \in J\) and \(j_0 \in J\) such that

\[
\begin{aligned}
\mu_{k+1} &= \max_{i \in J} w_i^{k+1} y_i^{k+1}
= w_{i_0}^k y_{i_0}^k + \tau\,(w_{j_0}^k\, dy_{j_0} + y_{j_0}^k\, dw_{j_0}) \\
&= \max_{i \in J} w_i^k y_i^k + \tau \min_{j \in J}\,(w_j^k\, dy_j + y_j^k\, dw_j)
= \mu_k + \tau\, d\mu.
\end{aligned}
\]

From the above equalities, one deduces that \(d\mu = \min_{j \in J}\,(w_j^k\, dy_j + y_j^k\, dw_j) \le w_i^k\, dy_i + y_i^k\, dw_i\) for every \(i \in J\). This completes the proof of a).


b) Let \(k \in K\); as \(\mu_k = \max_{i \in J} w_i^k y_i^k\), one has \(\Phi_i^k \le 0\) for all \(i \in J\). Expanding the function \(\Phi_i\) to first order in the neighborhood of \((w_i^k, y_i^k, \mu_k)\), one has:

\[
\begin{aligned}
\Phi_i^{k+1}
&= \Phi_i^k + \tau \left( \frac{\partial \Phi_i^k}{\partial w_i}\, dw_i + \frac{\partial \Phi_i^k}{\partial y_i}\, dy_i + \frac{\partial \Phi_i^k}{\partial \mu}\, d\mu \right) + o(\tau) \\
&= \Phi_i^k + \tau \left( \left(1 - \frac{w_i^k}{r_i^k}\right) dw_i + \left(1 - \frac{y_i^k}{r_i^k}\right) dy_i - \frac{d\mu}{r_i^k} \right) + o(\tau),
\end{aligned}
\]

where \(r_i^k = \sqrt{(w_i^k)^2 + (y_i^k)^2 + 2\mu_k}\). Replacing \(\mu_k\) by \(w_i^k y_i^k\), and using \(\mu_k \ge w_i^k y_i^k\) for all \(i \in J\), one has

\[
\Phi_i^{k+1} \ge \Phi_i^k + \tau\, \frac{w_i^k\, dy_i + y_i^k\, dw_i - d\mu}{w_i^k + y_i^k}.
\]

But \(w_i^k\, dy_i + y_i^k\, dw_i - d\mu \ge 0\) according to a). Hence \(\Phi_i^{k+1} \ge \Phi_i^k\).

The sequence \(\{\Phi_i^k\}_{k \in K}\) is increasing and \(\Phi_i^k \le 0\) for all \(k\). Since the sequence \(\{w_i^k y_i^k\}_{k \in K}\) decreases and converges to zero, the sequence \(\{\mu_k\}_{k \in K}\) also converges to zero; \(\Phi_i\) being an NCP function, the sequence \(\{\Phi_i^k\}_{k \in K}\) converges to zero. ■

The above result shows that the data of the problem solved alone may be used to update the perturbed Fischer-Burmeister functional. When the rule \(\mu_{k+1} = \beta \mu_k\) (with \(\beta \in [0, 1)\)) is used instead to update the parameter \(\mu\) of that functional (see [11, 20]), one has \(\mu_k = \beta^k \mu_0\), and for \(\varepsilon > 0\) small enough, reaching \(\mu_k \le \varepsilon\) to compute an optimal solution requires solving at least \(\mathrm{It}_{\min} = \ln(\varepsilon / \mu_0) / \ln(\beta)\) subproblems, no matter what the size of the data is.
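The iteration count of the geometric rule can be made concrete with a short computation (the values of β, μ_0 and ε are illustrative):

```python
import math

# With the geometric rule mu_{k+1} = beta*mu_k of [11, 20], reaching
# mu_k <= eps takes at least ceil(ln(eps/mu_0)/ln(beta)) subproblem solves,
# independently of the problem data.
beta, mu0, eps = 0.5, 1.0, 1e-6
it_min = math.ceil(math.log(eps / mu0) / math.log(beta))
print(it_min)

# The data-driven rule of this section instead sets mu_k = max_i w_i^k y_i^k,
# so mu_k inherits the decrease of the complementarity products produced by
# the iterates themselves.
```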

4. Solving a linear MPEC or a complementarity problem using the SLP algorithm

The SLP algorithm constitutes an excellent tool for solving MPECs or complementarity problems with linear constraints. In the SLP algorithm, our linearization method takes into account the fact that the variables are nonnegative, contrary to the SQP algorithm in [20].


4.1. Solving the linear MPEC problem

Let ( , , )MPEC M N q be the MPEC problem (2.1), and ( , , )k k k kz x y w= be a feasible solution of problem ( , , , )kMPEC M N q µ . The SLP algorithm computes a descent direction

( , , )dz dx dy dw= , while solving the problem ( ( , )kkLP z µ ). Let ( , , )z x y w µ= ∈F . When 0µ → , a

tangent vector to µF at z is the limit dz of any series { }( ) /kkz z τ− where

( , , )k k k kz x w y µ= ∈F and { }kτ is a sequence of positives scalars. A stationary point of the

problem ( ( , , )MPEC M N q ) as defined in [29] is a vector 0z ∈F such that 1( , ) ( , ) 0tx y

dxf x y

dy⎛ ⎞

∇ ≥⎜ ⎟⎝ ⎠

for any ( , , )dz dx dy dw= tangent to 0 at zF . From property (2.6) of the perturbed Fisher-Burmeister function, one has 0 .KKT=F F The principle of SLP algorithm, applied to solve each subproblem ( ( , , , )kMPEC M N q µ ) consists in computing a descent direction ( , )dx dy using a feasible solution kkz µ∈F ; then at each iteration, one solves the problem ( ( , )k

kLP z µ ) which is a tangential approximation ( ( , , , )kMPEC M N q µ ) at the neighborhood of the current solution kz . A descent direction ( , , )dz dx dy dw= is such that, 0 and 0k kw dw y dyτ τ+ ≥ + ≥ , where *τ +∈R is the displacement step.

We are now ready to formally state the SLP algorithm for solving the MPEC (2.1).

SLP Algorithm: solving \(MPEC(M, N, q)\)

Step 0 (Initialization). Let \(\varepsilon > 0\) and \(\beta \in\, ]0, 1[\) be sufficiently small fixed constants, and let \(z^0 = (x^0, y^0, w^0)\) be a feasible solution of the relaxation of \(MPEC(M, N, q)\). Set \(\mu_0 = \max_i \{w_i^0 y_i^0\}\), \(k = 0\), and go to Step 1.

Step 1 (Update and displacement step). Let \(z^k\) be a feasible solution of problem \(MPEC(M, N, q, \mu_k)\).

i) Compute a descent direction \(d^k\) from the point \(z^k\) by solving the program \(LP(z^k, \mu_k)\), whose optimal value is \(\xi^*\). If (\(\xi^* \le \varepsilon\) or \(dz^k = 0\)) and \(\mu_k > \varepsilon\), go to Step 2.

ii) Perform an Armijo line search to compute \(\tau_k\) such that \(z^k + \tau_k d^k \in \mathcal{F}_{\mu_k}\) and \(f_1(z^k + \tau_k d^k) \le f_1(z^k)\). Set \(z^{k+1} = z^k + \tau_k d^k\), then update \(\mu_{k+1} = \max_i \{w_i^{k+1} y_i^{k+1}\}\), and go to Step 3.

Step 2 (Zero displacement direction). Set \(z^{k+1} = z^k\), \(\mu_{k+1} = \beta \mu_k\), \(k = k + 1\), and go to Step 1.

Step 3 (Termination check). If \(\mu_k + \xi^* \le \varepsilon\) or \(\|\Psi(w^k, y^k, \mu_k)\|_2 + \xi^* \le \varepsilon\), then stop: \(z^k\) is a stationary point of \(MPEC(M, N, q, \mu_k)\). If \(\mu_k \le \varepsilon\) or \(\|\Psi(w^k, y^k, \mu_k)\|_2 \le \varepsilon\), then stop: \(z^k\) is a feasible solution of \(MPEC(M, N, q, \mu_k)\). Else, set \(k = k + 1\) and go to Step 1.

The stopping criteria of the SLP algorithm take into account both the convergence to zero of the sequence \(\{\mu_k\}\) and the optimal value of the program \(LP(z^k, \mu_k)\). Moreover, when the computed displacement direction is equal to zero and the current solution \(z^k\) is not a stationary point, we suggest that the parameter \(\mu_k\) be updated as in [11, 20]. In the worst case, the SLP algorithm stops with a feasible solution of an MPEC, that is, when \(\mu_k \le \varepsilon\) and \(\xi^* \ne 0\).
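The control flow of Steps 0-3 can be sketched as follows. This is a structural skeleton only, under the assumption that the direction-finding LP and the Armijo search are supplied by the caller through `solve_lp`; the names and the toy stub are hypothetical.

```python
import numpy as np

def slp(z0, solve_lp, eps=1e-8, beta=0.5, max_iter=200):
    """Structural sketch of the SLP algorithm (Steps 0-3). `solve_lp` is a
    user-supplied routine returning (xi_star, dz) for LP(z^k, mu_k); the
    Armijo line search of Step 1(ii) is abstracted into the step dz."""
    x, y, w = z0
    mu = float(np.max(w * y))                  # Step 0: mu_0 = max_i w_i^0 y_i^0
    for _ in range(max_iter):
        xi, dz = solve_lp((x, y, w), mu)       # Step 1(i): direction finding
        if xi <= eps and mu <= eps:            # Step 3: stationarity test
            return (x, y, w), mu, True
        if (xi <= eps or np.allclose(dz, 0.0)) and mu > eps:
            mu *= beta                         # Step 2: zero displacement
            continue
        dx, dy, dw = dz                        # Step 1(ii): move, then re-update
        x, y, w = x + dx, y + dy, w + dw
        mu = float(np.max(w * y))              # data-driven update of Section 3
    return (x, y, w), mu, False

# Toy run with a stub LP that always reports xi* = 0 (hypothetical):
zero_lp = lambda z, mu: (0.0, (np.zeros(1), np.zeros(1), np.zeros(1)))
z0 = (np.zeros(1), np.full(1, 1e-5), np.full(1, 1e-5))
z, mu, stationary = slp(z0, zero_lp)
```

The skeleton terminates immediately here because the stub reports a zero optimal value and μ_0 is already below the tolerance.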

Let \((D_y^*, D_w^*) = \lim_{k \to \infty} (D_y^k, D_w^k)\); for an arbitrary matrix \(A\) and arbitrary subsets \(I\) and \(J\) of row and column indexes respectively, we denote by \(A_{IJ}\) the submatrix of \(A\) whose rows and columns are indexed by \(I\) and \(J\). Let \(L \equiv \{i : (D_y^*)_{ii} = 0\}\). According to Proposition 3.2 in [20], if the principal submatrix \(M_{LL}\) is nonsingular, then the limiting matrix

\[
Q^* = \begin{bmatrix} M & -I_{n_y} \\ D_y^* & D_w^* \end{bmatrix}
\]

is nonsingular. We are now able to state and prove the convergence theorem for the SLP algorithm.

Theorem 3.1. For given \(\mu_k > 0\), \(k = 1, 2, \ldots\), consider the sequence of subproblems \(MPEC(M, N, q, \mu_k)\), where \(M\) is a \(P_0\) matrix. Let \(z^k \in \mathcal{F}_{\mu_k}\) be a feasible solution of \(MPEC(M, N, q, \mu_k)\) such that Assumption (H1) is satisfied. Assume that the optimal value of the linear program \(LP(z^k, \mu_k)\) is \(\xi^* = 0\) when \(\lim_{k \to \infty} \mu_k = 0\), and that the principal submatrix \(M_{LL}\) is nonsingular, where \(L \equiv \{i : (D_y^*)_{ii} = 0\}\). Then, as \(k \to \infty\), there exist KKT multipliers \(\alpha_0^k \ge 0\), \((\alpha^k, \beta^k) \in \mathbb{R}^{n_y} \times \mathbb{R}^{n_y}\), \((\gamma_x^k, \gamma_w^k, \gamma_y^k) \in \mathbb{R}^{n_x}_+ \times \mathbb{R}^{n_y}_+ \times \mathbb{R}^{n_y}_+\), \(\lambda^k \in \mathbb{R}^m_+\) for problem \(MPEC(M, N, q, \mu_k)\). Moreover, \(z = \lim_{k \to \infty} z^k\) is a stationary point of \(MPEC(M, N, q)\).

Proof. For \(k \ge 1\), let \(\alpha_0^k \ge 0\), \((\alpha^k, \beta^k) \in \mathbb{R}^{n_y} \times \mathbb{R}^{n_y}\), \((\gamma_x^k, \gamma_w^k, \gamma_y^k) \in \mathbb{R}^{n_x}_+ \times \mathbb{R}^{n_y}_+ \times \mathbb{R}^{n_y}_+\), \(\lambda^k \in \mathbb{R}^m_+\) be the dual variables for the problem \(LP(z^k, \mu_k)\) (when it has a solution); its dual is:


\(DLP(z^k, \mu_k)\):

\[
\begin{array}{ll}
\min & -(\beta^k)^t \Psi(w^k, y^k, \mu_k) + (\gamma_x^k)^t x^k + (\gamma_w^k)^t w^k + (\gamma_y^k)^t y^k + (\lambda^k)^t (b - Ax^k) \\
\text{s.t.} & N^t \alpha^k - \gamma_x^k + A^t \lambda^k + \alpha_0^k\, \nabla_x f_1(x^k, y^k) = 0, \\
& M^t \alpha^k + D_y^k \beta^k - \gamma_y^k + \alpha_0^k\, \nabla_y f_1(x^k, y^k) = 0, \\
& -\alpha^k + D_w^k \beta^k - \gamma_w^k = 0, \\
& \lambda^k \ge 0, \quad (\alpha^k, \beta^k) \in \mathbb{R}^{n_y} \times \mathbb{R}^{n_y}, \quad (\gamma_x^k, \gamma_w^k, \gamma_y^k) \in \mathbb{R}^{n_x}_+ \times \mathbb{R}^{n_y}_+ \times \mathbb{R}^{n_y}_+, \\
& \alpha_0^k \ge 1.
\end{array}
\tag{3.1}
\]

From the last constraint in (3.1), we have \(\alpha_0^k \ne 0\). Assume that

\[
\lim_{k \to \infty} (\alpha_0^k, \alpha^k, \beta^k, \gamma_x^k, \gamma_y^k, \gamma_w^k, \lambda^k)
= (\alpha_0, \alpha, \beta, \gamma_x, \gamma_y, \gamma_w, \lambda).
\]

When \(k \to \infty\), as \(Q^* = \lim_{k \to \infty} Q^k\) is a regular matrix, \(A^* = \lim_{k \to \infty} A^k\) is also a regular matrix. So the constraints in (3.1) can be transformed into the following system:

\[
\begin{aligned}
N^t \alpha - \gamma_x + A^t \lambda + \alpha_0\, \nabla_x f_1(x, y) &= 0, \\
M^t \alpha + D_y^* \beta - \gamma_y + \alpha_0\, \nabla_y f_1(x, y) &= 0, \\
-\alpha + D_w^* \beta - \gamma_w &= 0.
\end{aligned}
\]

From the duality theorem and Proposition 2.1, we conclude that, at optimality of \(LP(z, 0)\), there exists a feasible solution to the system of constraints (3.1) when \(k \to \infty\). Hence, there exists a vector of KKT multipliers for the problem \(MPEC(M, N, q, 0)\) that coincides with the vector of dual variables of the problem \(LP(z, 0)\). As \(\xi^* = 0\) and \(\Phi(w_i, y_i, 0) = 0\) for all \(i = 1, 2, \ldots, n_y\), from the duality theorem and the complementarity relations at optimality, one has

\[
\xi^* = -\beta^t \Psi(w, y, 0) + \gamma_x^t x + \gamma_w^t w + \gamma_y^t y + \lambda^t (b - Ax) = 0.
\]

Hence, when \(\mu_k \to 0\) and \(\xi^* = 0\), \(z\) is a stationary point of the MPEC problem \(MPEC(M, N, q)\). ■

Contrary to the SQP algorithm of Fukushima et al. [20], the nonnegativity constraints on the variables are taken into account in our method.


4.2. Solving a complementarity problem

We now consider the general complementarity problem, which consists in finding a feasible solution to the following system:

GCP:
\[
\begin{cases}
0 \le y \perp H(x, y) \ge 0, \\
(x, y) \in X \times Y,
\end{cases}
\tag{4.1}
\]

where \(H : \mathbb{R}^{n_x}_+ \times \mathbb{R}^{n_y}_+ \to \mathbb{R}^{n_y}_+\), \(X \equiv \{x \in \mathbb{R}^{n_x} : Ax \le b, \text{ with } A \in \mathbb{R}^{m \times n_x} \text{ and } b \in \mathbb{R}^m\}\), and \(Y \equiv \mathbb{R}^{n_y}_+\). We assume that for fixed \(x \in X\), the function \(y \mapsto H(x, y)\) is a \(P_0\) function, i.e., \(\nabla_y H(x, y)\) is a \(P_0\) matrix. Let \(\mathcal{F}^{GCP}\) be the set of feasible solutions of the problem GCP, assumed to be nonempty. Relaxing the complementarity constraints in (GCP), we obtain a relaxed problem with feasible set \(\mathcal{F}^{GCP}_r\). When the function \(H(x, y)\) is affine, one can write it as \(H(x, y) = Nx + My - q\), where \(N \in \mathbb{R}^{n_y \times n_x}\), \(M \in \mathbb{R}^{n_y \times n_y}\), \(q \in \mathbb{R}^{n_y}_+\), and \(M\) is a \(P_0\) matrix; one then gets a generalized linear complementarity problem. Let us use the well-known equivalence (see, e.g., [12], Section 1.5.3) between the complementarity problem, i.e., identifying \(y\) such that for \(x \in X\), \(y \ge 0\), \(H(x, y) \ge 0\) and \(\langle H(x, y), y \rangle = 0\), and the convex optimization problem

MP:
\[
\begin{array}{ll}
\min_y & \langle H(x, y), y \rangle \\
\text{s.t.} & H(x, y) \ge 0, \\
& y \ge 0.
\end{array}
\]

Hence it is easy to show that \((x^*, y^*) \in X \times Y\) is a solution of the MPEC defined by (2.1) if and only if \((x^*, y^*)\) solves the bilevel programming problem

BPP1:
\[
\begin{array}{ll}
\min_{x, y} & f_1(x, y) \\
\text{s.t.} & x \in X, \\
& y \in \displaystyle\arg\min_y \{\, \langle H(x, y), y \rangle : H(x, y) \ge 0, \; y \ge 0 \,\}, \\
& \text{with } \langle H(x^*, y^*), y^* \rangle = 0.
\end{array}
\]


Moreover, if the matrix \(M\) is symmetric, let \(f_2(x, y) = \tfrac{1}{2} y^t M y + y^t N x + y^t q\). It is easy to show that \((x^*, y^*) \in X \times Y\) is a solution of the MPEC defined by (2.1) if and only if \((x^*, y^*)\) solves the bilevel programming problem

BPP2:
\[
\begin{array}{ll}
\min_{x, y} & f_1(x, y) \\
\text{s.t.} & x \in X, \\
& y \in \displaystyle\arg\min_y \{\, f_2(x, y) : H(x, y) \ge 0, \; y \ge 0 \,\},
\end{array}
\]

since \(\nabla_y f_2(x, y) = My + Nx + q\).
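The identity behind BPP2 can be checked numerically: for symmetric M, the gradient of f_2(x, y) = ½ yᵗMy + yᵗNx + yᵗq with respect to y is My + Nx + q. The random data below are purely illustrative.

```python
import numpy as np

# Finite-difference check (hypothetical data) that grad_y f_2 = M y + N x + q.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)); M = M + M.T      # make M symmetric
N = rng.standard_normal((3, 2)); q = rng.standard_normal(3)
x = rng.standard_normal(2); y = rng.standard_normal(3)

f2 = lambda v: 0.5 * v @ M @ v + v @ (N @ x + q)
g_exact = M @ y + N @ x + q
h = 1e-6
g_fd = np.array([(f2(y + h * e) - f2(y - h * e)) / (2 * h) for e in np.eye(3)])
```

Since f_2 is quadratic, the central difference agrees with the exact gradient up to roundoff.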

There exist many efficient methods in the literature to solve bilevel programming problems (see [2, 3, 9, 30, 31, 41]). If the complementarity relations in (4.1) are nondegenerate (i.e., \(y_i + H_i(x, y) \ne 0\) for all \(1 \le i \le n_y\)), in accordance with Assumption (H1), the SLP algorithm may be an excellent tool to solve the problem (GCP). Consider the following program:

QP:
\[
\begin{array}{ll}
\min & f(y, z) = y^t z \\
\text{s.t.} & Ax \le b, \\
& H(x, y) - z = 0, \\
& y^t z = 0, \\
& x, y, z \ge 0,
\end{array}
\tag{4.2}
\]

where \(z\) is a slack variable. As stated in Proposition 2.5 in Harker and Pang [21], following Cottle et al. [6], problem (GCP) has a solution if and only if zero is the optimal value of the quadratic program obtained by relaxing the program (QP). Unlike Cottle et al. [6], we suggest keeping the complementarity constraint \(y^t z = 0\). If the optimal value of the program (QP) is zero, then the corresponding solution is a solution of the complementarity problem (GCP). Let us assume that the problem (GCP) admits a feasible solution \(w = (x, y, z)\) and that the set of active constraints satisfies the linear independence constraint qualification at \(w\). The result below is well known; it expresses necessary optimality conditions for the problem (QP).
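The zero-optimal-value criterion can be illustrated on a tiny linear instance with x held fixed, so that H reduces to an ordinary LCP map y ↦ My + q̄ (q̄ absorbing the Nx − q part). The brute-force solver and the data below are hypothetical and for illustration only, not part of the paper's method.

```python
from itertools import product
import numpy as np

def solve_lcp_enum(M, q):
    """Brute-force the LCP  0 <= y  ⟂  z = M y + q >= 0  by choosing, for each
    index, whether y_i or z_i is the zero variable, solving the resulting
    linear system, and keeping a nonnegative solution.  Exponential in n:
    a didactic check of the zero-optimal-value criterion, not a solver."""
    n = len(q)
    for pattern in product([False, True], repeat=n):   # True: z_i = 0, y_i unknown
        B = np.eye(n)
        for i, y_basic in enumerate(pattern):
            if y_basic:
                B[:, i] = -M[:, i]                     # column of unknown y_i in z - M y = q
        try:
            u = np.linalg.solve(B, np.asarray(q, float))
        except np.linalg.LinAlgError:
            continue
        if (u >= -1e-10).all():
            y = np.where(pattern, u, 0.0)
            z = np.where(pattern, 0.0, u)
            return y, z                                # y >= 0, z = M y + q >= 0, y^t z = 0
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])                 # positive definite, hence P0
q = np.array([-1.0, -1.0])
y, z = solve_lcp_enum(M, q)
```

At the returned point the objective yᵗz of (QP) is zero, so the pair (y, z) is a solution of the underlying complementarity problem.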


Proposition 4.1. Let \(w = (x, y, z) \in \mathcal{F}^{GCP}\) be such that the complementarity relation \(0 \le y \perp z \ge 0\) is nondegenerate (\(y_i + z_i \ne 0\), \(i = 1, \ldots, n_y\)). Then \(w\) is a stationary point for the problem (QP) if there exist KKT multipliers \(\lambda \in \mathbb{R}^m_+\), \(\mu \in \mathbb{R}^{n_y}\), \(\eta \in \mathbb{R}\) such that:

\[
\begin{pmatrix} A^t \lambda \\ 0 \\ 0 \end{pmatrix}
+ \begin{pmatrix} \nabla_x H(x, y)^t \mu \\ \nabla_y H(x, y)^t \mu \\ -\mu \end{pmatrix}
+ (\eta + 1) \begin{pmatrix} 0 \\ z \\ y \end{pmatrix} = 0,
\qquad
\lambda \ge 0, \quad \lambda^t (Ax - b) = 0.
\tag{4.3}
\]

To solve the program (QP), as in [2, 4, 20, 22], the equilibrium constraint \(0 \le y \perp z \ge 0\) is perturbed, as previously done for an MPEC. One gets the following system of constraints:

\[
\begin{cases}
Ax \le b, \\
H(x, y) - z = 0, \\
\Psi(y, z, \mu) = 0, \\
x, y, z \ge 0,
\end{cases}
\tag{4.4}
\]

where the components of \(\Psi : \mathbb{R}^{n_y}_+ \times \mathbb{R}^{n_y}_+ \times \mathbb{R}_+ \to \mathbb{R}^{n_y}\) are \(\Phi(y_i, z_i, \mu)\), \(i = 1, 2, \ldots, n_y\). Let \((w^k, \mu_k)\) be a feasible solution of system (4.4), where \(w^k = (x^k, y^k, z^k)\). We use the notations \(H^k = H(x^k, y^k)\) and \(\Psi^k = \Psi(y^k, z^k, \mu_k)\); \(D_y^k \equiv \nabla_y \Psi^k\) and \(D_z^k \equiv \nabla_z \Psi^k\) are diagonal matrices with respective diagonal entries

\[
1 - \frac{y_i^k}{\sqrt{(z_i^k)^2 + (y_i^k)^2 + 2\mu_k}}
\quad \text{and} \quad
1 - \frac{z_i^k}{\sqrt{(z_i^k)^2 + (y_i^k)^2 + 2\mu_k}},
\qquad i = 1, \ldots, n_y.
\tag{4.5}
\]

If \((y, z) = \lim_{k \to \infty} (y^k, z^k)\) and \((D_y^*, D_z^*) = \lim_{k \to \infty} (D_y^k, D_z^k)\), then since \(y^t z = 0\) we have:

\[
(D_z^*)_{ii} = \begin{cases} 1 & \text{if } z_i = 0, \\ 0 & \text{if } y_i = 0, \end{cases}
\qquad \text{and} \qquad
(D_y^*)_{ii} = \begin{cases} 0 & \text{if } z_i = 0, \\ 1 & \text{if } y_i = 0. \end{cases}
\tag{4.6}
\]
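The limits (4.6) can be verified directly from the entries (4.5), evaluated on a complementary pair with a tiny μ (the values 0.7 and 0.4 are illustrative):

```python
import numpy as np

# Diagonal entries (4.5) of D_y and D_z, and their limits (4.6) as mu -> 0
# on a pair (y_i, z_i) with y_i * z_i = 0.
d_y = lambda y, z, mu: 1 - y / np.sqrt(y**2 + z**2 + 2 * mu)
d_z = lambda y, z, mu: 1 - z / np.sqrt(y**2 + z**2 + 2 * mu)

mu = 1e-14
print(d_z(0.7, 0.0, mu), d_y(0.7, 0.0, mu))   # z_i = 0: entries tend to (1, 0)
print(d_z(0.0, 0.4, mu), d_y(0.0, 0.4, mu))   # y_i = 0: entries tend to (0, 1)
```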

A descent direction related to problem (QP) is computed at each iteration of the SLP algorithm by solving the following linear programming problem:


\(LP(w^k, \mu_k)\):

\[
\begin{array}{ll}
\max_{dx, dy, dz, \xi} & \xi \\
\text{s.t.} & (z^k)^t dy + (y^k)^t dz + \xi \le 0, \\
& A\,dx \le b - Ax^k, \\
& \begin{bmatrix} \nabla_x H^k & \nabla_y H^k & -I_{n_y} \\ 0 & D_y^k & D_z^k \end{bmatrix}
\begin{pmatrix} dx \\ dy \\ dz \end{pmatrix}
= \begin{pmatrix} -H^k + z^k \\ -\Psi^k \end{pmatrix}, \\
& x^k + dx \in X, \quad y^k + dy \in \mathbb{R}^{n_y}_+, \quad z^k + dz \in \mathbb{R}^{n_y}_+, \quad \xi \ge 0.
\end{array}
\tag{4.7}
\]

According to Theorem 3.1, under Assumption (H1), the proposition below shows that every program \(LP(w^k, \mu_k)\) has a unique optimal solution.

Proposition 4.2. Let (QP) be an MPEC derived from a linear complementarity problem, and let \(w^k = (x^k, y^k, z^k) \in \mathcal{F}^{GCP}_r\). If the function \(y \mapsto H(x, y)\) is a \(P_0\) function, then the linear programming problem \(LP(w^k, \mu_k)\) has a unique optimal solution.

Let $(\xi^*, dx, dy, dz)$ be an optimal solution of the program $LP(w^k, \mu_k)$; its simplex multipliers $\alpha^k \ge 1$, $\lambda^k \in \mathbb{R}^m_+$, $(\mu^k, \eta^k) \in \mathbb{R}^{n_y} \times \mathbb{R}^{n_y}$ are computed from the dual of $LP(w^k, \mu_k)$, whose formulation is given by:

$DLP(w^k, \mu_k)$:

$$
\min_{\alpha,\, \lambda,\, \mu,\, \eta} \ (b - A x^k)^t \lambda - (H^k - z^k)^t \mu - (\Psi^k)^t \eta
\quad \text{s.t.} \quad
\begin{cases}
\alpha \begin{pmatrix} 0 \\ z^k \\ y^k \end{pmatrix}
+ \begin{pmatrix} A^t \\ 0 \\ 0 \end{pmatrix} \lambda
+ \begin{pmatrix} (\nabla_x H^k)^t \\ (\nabla_y H^k)^t \\ -I_{n_y} \end{pmatrix} \mu
+ \begin{pmatrix} 0 \\ D^k_y \\ D^k_z \end{pmatrix} \eta = 0, \\
\alpha \ge 1, \quad \lambda \in \mathbb{R}^m_+, \quad (\mu, \eta) \in \mathbb{R}^{n_y} \times \mathbb{R}^{n_y}.
\end{cases}
\qquad (4.8)
$$

- If $\xi^* > 0$, one has $\sum_{j=1}^{n_y} (y^k_j\, dz_j + z^k_j\, dy_j) \le -\xi^* < 0$, and $(dx, dy, dz)$ is a descent direction.

- If $\xi^* = 0$ and $\lim_{k \to \infty} \mu_k = 0$, let $(x, y) = \lim_{k \to \infty} (x^k, y^k)$ be a stationary point of the problem (QP). Let $(\alpha, \lambda, \mu, \eta) = \lim_{k \to \infty} (\alpha^k, \lambda^k, \mu^k, \eta^k)$; $\alpha \ge 1$, $\lambda \in \mathbb{R}^m_+$, $(\mu, \eta) \in \mathbb{R}^{n_y} \times \mathbb{R}^{n_y}$ satisfying (4.8) are the KKT multipliers of the problem (QP). One has $\lambda^t (b - Ax) = 0$, $\mu^t (H(x, y) - z) = 0$ and $\eta^t \Psi(z, y, 0) = 0$. Necessarily, we have $f(x, y) = 0$.


At optimality, when $k \to \infty$, i.e. $\mu_k \to 0$, assume that $\alpha = 1$. From the last block of equalities in (4.3), we have $y_i = 0 \Rightarrow \mu_i = 0$ and $z_i = 0 \Rightarrow \mu_i = (\eta_i + 1)\, y_i$. From the last block of equalities in (4.8), we have $y_i = 0 \Rightarrow \mu_i = 0$, since from (4.6) $(D^*_z)_{ii} = 0$ in that case, and $z_i = 0 \Rightarrow \mu_i = y_i + \eta_i$. Now using (4.6), set $\eta' = (0, \eta_z, \eta_y)^t$; we have

$$
\begin{pmatrix} 0 \\ D^*_y z \\ D^*_z y \end{pmatrix}
= \begin{pmatrix} 0 \\ z \\ y \end{pmatrix}.
$$

Then the systems of constraints (4.3) and (4.8) are equivalent at optimality. Hence, a stationary point of (QP) is a solution of (GCP).

The approach for solving the problem (GCP) described above may be used to compute a rational solution of the bilevel problem, using its KKT formulation.

5. Computational experiments

We use data from [20] for the numerical experiments.

5.1. The problems

Using the SLP algorithm, we solved MPEC problems with linear equilibrium constraints and complementarity problems of type (5.1). For all problems solved, the matrix M is a P0-matrix. The problems on which our computational experiments were carried out are the following:

Problem 1. This problem has one upper-level variable and one lower-level variable:

$$
\min_{x,\, y} \ \tfrac{1}{2}x^2 + xy - 95x
\qquad \text{s.t.} \quad
\begin{cases}
0 \le x \le 100, \\
0 \le y \perp 10y + 2x - 100 \ge 0.
\end{cases}
$$

Problem 2. This problem has an affine variational inequality as its lower-level constraint. Cast in the form (2.1), the data are as follows:

$$
X = [0, 10] \times [0, 10], \qquad
f(x, y) = (x_1 + x_2 + y_1 - 15)^2 + (x_1 + x_2 + y_2 - 15)^2,
$$
$$
q = \begin{pmatrix} 36 \\ 25 \end{pmatrix}, \qquad
N = \begin{pmatrix} 8/3 & 2 \\ 2 & 5/4 \end{pmatrix}, \qquad
M = \begin{pmatrix} 2 & 8/3 \\ 5/4 & 2 \end{pmatrix}.
$$


Problem 3. This is a set of several problems with randomly generated data. The objective function is given as

$$
f(x, y) = \tfrac{1}{2} x^t x + e^t y, \qquad \text{where } e = (1, \ldots, 1)^t \in \mathbb{R}^{n_y}.
$$

The entries of the various matrices are randomly generated (using the MATLAB generator) with 30% density and satisfy some technical constraints in order to ensure that the optimal solution is the zero vector. Specifically:

- The pair (A, b) is such that the $n_x$-vector $x^0 = (1, \ldots, 1)^t \in \mathbb{R}^{n_x}$ satisfies the constraint $Ax^0 \le b$.
- The matrix M is strictly diagonally dominant, with off-diagonal entries being random numbers between 0 and 1.
- The entries of A and N are chosen between -50 and 50, while the diagonal entries of M and the entries of b and q are taken from the interval [0, 50].
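The generation recipe above can be sketched as follows. This is a Python approximation of the MATLAB procedure; the function name `generate_instance` and the exact way feasibility of $x^0$ and strict diagonal dominance are enforced are our own assumptions:

```python
import random

def generate_instance(nx, ny, m, density=0.3, seed=0):
    """Random data in the spirit of problem 3: ~30%-dense A, N with entries
    in [-50, 50], M strictly diagonally dominant with off-diagonal entries
    in (0, 1), q in [0, 50], and b chosen so that x0 = (1, ..., 1)
    satisfies A x0 <= b."""
    rng = random.Random(seed)

    def entry(lo, hi):
        # keep an entry with probability `density`, else make it zero
        return rng.uniform(lo, hi) if rng.random() < density else 0.0

    A = [[entry(-50.0, 50.0) for _ in range(nx)] for _ in range(m)]
    N = [[entry(-50.0, 50.0) for _ in range(nx)] for _ in range(ny)]
    M = [[entry(0.0, 1.0) for _ in range(ny)] for _ in range(ny)]
    for i in range(ny):
        off = sum(abs(M[i][j]) for j in range(ny) if j != i)
        # diagonal drawn in [0, 50], bumped if needed to enforce strict dominance
        M[i][i] = max(rng.uniform(0.0, 50.0), off + 1.0)
    q = [rng.uniform(0.0, 50.0) for _ in range(ny)]
    # sum(row) equals (A x0)_i for x0 = (1, ..., 1), so b_i >= (A x0)_i
    b = [max(rng.uniform(0.0, 50.0), sum(row)) for row in A]
    return A, b, N, M, q
```

A strictly diagonally dominant matrix with positive diagonal is in particular a P-matrix, consistent with the P0 requirement stated above.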

The combination $(n_x, n_y, m_p)$, with $n = n_x + n_y$, characterizes the size of the problems. The initial solution depends on the problem solved. Considering a formulation of the constraints of type (2.1), the initial solution of the MPEC problem may be:

i) $x^0 = (0, \ldots, 0)^t \in \mathbb{R}^{n_x}$, $y^0 = (1, \ldots, 1)^t \in \mathbb{R}^{n_y}$;
ii) $x^0 = (1, \ldots, 1)^t \in \mathbb{R}^{n_x}$, $y^0 = (0, \ldots, 0)^t \in \mathbb{R}^{n_y}$;
iii) $x^0 = (1, \ldots, 1)^t \in \mathbb{R}^{n_x}$, $y^0 = (1, \ldots, 1)^t \in \mathbb{R}^{n_y}$ and
$$
z^0 = \begin{pmatrix} b \\ q \end{pmatrix}
- \begin{pmatrix} A & 0 \\ N & M \end{pmatrix}
\begin{pmatrix} x^0 \\ y^0 \end{pmatrix},
$$

while the initial solution for the complementarity problem is $x^0 = (0, \ldots, 0)^t$ (or $(1, \ldots, 1)^t$) $\in \mathbb{R}^{n_x}$, $y^0 = (1, \ldots, 1)^t \in \mathbb{R}^{n_y}$, with $z^0$ as above.

The computational experiments in [20] considered only the initial solution given by ii).

5.2. Results of the computational experiments

The SLP algorithm and the SQP algorithm of [20] (FLP-SQP) were coded in MATLAB and extensively tested on a Pentium 4 PC (3.2 GHz processor, 1.24 GB of RAM).

a. Running the SLP algorithm

The computational results for the MPEC problems are summarized in table 5.1. The SLP algorithm is stopped when $\xi^* + \mu_k \le \varepsilon$, where $\mu^* = \lim_{k \to \infty} \mu_k \cong 0$. Implicitly, one has


$\xi^* \le \varepsilon$ and $\mu_k \le \varepsilon$. This means that the SLP algorithm has found a solution $w = (x, y, z) = \lim_{k \to \infty} w^k$ that is the best approximation of a stationary point of the problem solved. For problem 3, since the data are randomly generated, for a fixed $(n_x, n_y, m_p)$ we tested the SLP algorithm on 10 problem instances in each series. As for the CPU times, the average number of iterations is computed over these 10 instances. For the initial solutions reported in the tables, $(x^0, y^0) = (0, 1)$ means that all components of $x^0$ equal 0 and those of $y^0$ equal 1; the same convention applies to $(x^0, y^0) = (1, 1)$ and $(x^0, y^0) = (1, 0)$.
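The stopping rule just described amounts to a one-line test (a sketch; the name `slp_converged` and the default tolerance are ours):

```python
def slp_converged(xi_star, mu_k, eps=1.0e-8):
    # Stop when xi* + mu_k <= eps; since both terms are nonnegative,
    # this implies xi* <= eps and mu_k <= eps simultaneously.
    return xi_star + mu_k <= eps
```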

Using $(x^0, y^0) = (1, 0)$ as the initial solution for each series of problem 3, the SLP algorithm obtains the same optimal solution as the FLP-SQP algorithm, but with a lower number of iterations. However, using $(x^0, y^0) = (1, 1)$ or $(x^0, y^0) = (0, 1)$ as the initial solution, the SLP algorithm most of the time (8 times out of 10) computed a merely feasible solution ($\mu^* \le \varepsilon$ with $\xi^* \ne 0$). As we mentioned above, such a solution is not a stationary point of the problem $MPEC(M, N, q)$.

The SLP algorithm proved to be especially efficient for solving linear complementarity problems; the computational results for these problems are summarized in table 5.2. We recall that the data for a problem of size $(n, n_x, n_y, m_p)$ are the same as in problem 3 above. We systematically found zero as the optimal value of the problem (QP), as well as $\mu^* \cong 0$ and $\xi^* \cong 0$ for the optimal value of the program $LP(w^k, \mu_k)$.

Name          Size (n, nx, ny, mp)   Avg. iterations   (x0, y0)   CPU
Problem 1     (4,2,2,2)              6                 (0,1)      0.90
Problem 1     (4,2,2,2)              6                 (1,1)      0.50
Problem 2     (2,1,1,1)              10                (0,1)      1.13
Problem 2     (2,1,1,1)              10                (1,1)      1.13
Problem 3     (50,25,25,8)           10                (0,1)      1.15
Problem 3     (50,25,25,8)           9                 (1,1)      1.14
Problem 3     (80,40,40,13)          8                 (1,1)      1.75
Problem 3     (80,40,40,13)          12                (0,1)      2.05
Problem 3     (100,50,50,17)         14                (0,1)      2.20
Problem 3*    (100,50,50,17)         16                (1,1)      2.37
Problem 3**   (150,75,75,25)         17                (0,1)      4.73
Problem 3     (150,75,75,25)         15                (1,1)      3.87

Table 5.1: Performance of the SLP algorithm (MPEC problems)


Size (n, nx, ny, mp)   Avg. iterations   (x0, y0)
(50,25,25,8)           7                 (0,1)
(50,25,25,8)           4                 (1,1)
(80,40,40,13)          8                 (1,1)
(80,40,40,13)          5                 (0,1)
(100,50,50,17)         9                 (1,1)
(100,50,50,17)         4                 (0,1)
(150,75,75,25)         10                (1,1)
(150,75,75,25)         6                 (0,1)

Table 5.2: Performance of the SLP algorithm (complementarity problems)

Figure 5.1: Evolution of the values of $\mu_k$ (*) and $\|\Psi^k\|_2$ (+) as functions of $k$ (Problem 3*)

Figures 5.1 and 5.2 show the curves of the evolution of $\mu_k$ (curve of points (*)) and of $\|\Psi^k\|_2$ (curve of points (+)) as functions of $k$. One can observe on these curves that the functions $k \mapsto \|\Psi(z^k, y^k)\|_2$ and $k \mapsto \mu_k(z^k, y^k) = \max_{i \in I} (z^k_i\, y^k_i)$ vary in an identical manner; this confirms the soundness of the update of the parameter $\mu_k$ that we proposed experimentally.
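The coupling observed in the figures can be reproduced in a few lines. The following Python sketch uses our reading of the update, $\mu_k = \max_i z^k_i y^k_i$ (the index set $I$ is taken to be all indices here), together with the perturbed Fischer-Burmeister residual:

```python
import math

def mu_update(y, z):
    # Experimentally proposed update (as described above): mu = max_i y_i * z_i.
    return max(yi * zi for yi, zi in zip(y, z))

def psi_norm(y, z, mu):
    # ||Psi(y, z, mu)||_2 with perturbed Fischer-Burmeister components.
    return math.sqrt(sum((yi + zi - math.sqrt(yi * yi + zi * zi + 2.0 * mu)) ** 2
                         for yi, zi in zip(y, z)))

# As the iterates approach complementarity, mu_k and ||Psi^k||_2 shrink together.
y1, z1 = [1.0, 0.1], [0.1, 2.0]
y2, z2 = [1.0, 0.01], [0.01, 2.0]
m1, m2 = mu_update(y1, z1), mu_update(y2, z2)
n1, n2 = psi_norm(y1, z1, m1), psi_norm(y2, z2, m2)
```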


Figure 5.2: Evolution of the values of $\mu_k$ (*) and $\|\Psi^k\|_2$ (+) as functions of $k$ (Problem 3**)

b. Computational experience with the Fukushima, Luo and Pang [20] SQP algorithm (FLP-SQP)

Table 5.3 presents the computational results of the FLP-SQP algorithm. We used the MATLAB code of the initial version of this algorithm. We observed that, whatever the size of problem 3, the minimal number of iterations depends on the stopping criterion $\varepsilon$. When starting from the initial solution $(x^0, y^0) = (0, 1)$ on problem 3, the FLP-SQP algorithm failed 3 times out of 10 to compute the initial solution used by the MATLAB library routine that determines the descent direction. With the initial solution $(x^0, y^0) = (1, 1)$, the FLP-SQP algorithm computed stationary points on the problem 3 series ($f(x^*, y^*) \ne 0$). Using the initial solution $(x^0, y^0) = (1, 0)$, the SLP and FLP-SQP algorithms both computed a global optimum of the problem 3 series.

For the small problems 1 and 2, we fixed $\mu_0 = 0.6$ and $\beta = 0.2$ to accelerate the iterative process. Even so, the FLP-SQP algorithm was unable to solve these problems from the initial solutions $(x^0, y^0) = (1, 1)$ or $(x^0, y^0) = (0, 1)$. For the series (100, 50, 50, 17) (respectively (150, 75, 75, 25)) of problem 3, the average number of iterations was 40, although on some test problems of the series the optimal solution had already been computed by the 20th iteration (respectively the 30th iteration). This confirms the arbitrary character of the update of the parameter $\mu$ used in the perturbed Fischer-Burmeister functional.


Name        Size (n, nx, ny, mp)   Avg. iterations   (x0, y0)   (mu0, beta)   CPU
Problem 1   (4,2,2,2)              17                (0,0)      (0.6,0.2)     0.25
Problem 2   (2,1,1,1)              23                (0,0)      (0.6,0.2)     0.26
Problem 1   (4,2,2,2)              17                (1,0)      (0.6,0.5)     0.17
Problem 2   (2,1,1,1)              23                (1,0)      (0.6,0.5)     0.21
Problem 3   (50,25,25,8)           35                (1,0)      (0.6,0.5)     3.11
Problem 3   (50,25,25,8)           31                (1,1)      (0.6,0.5)     3.51
Problem 3   (80,40,40,13)          37                (1,0)      (0.6,0.5)     9.46
Problem 3   (80,40,40,13)          31                (1,1)      (0.6,0.5)     8.93
Problem 3   (100,50,50,17)         38                (1,0)      (0.6,0.5)     13.70
Problem 3   (100,50,50,17)         41                (1,1)      (0.6,0.5)     23.73
Problem 3   (150,75,75,25)         35                (1,0)      (0.6,0.5)     39.40
Problem 3   (150,75,75,25)         40                (1,1)      (0.6,0.5)     80.70

Table 5.3: Performance of the FLP-SQP algorithm (MPEC problems)

6. Conclusion

Our numerical experiments show the relative efficiency of the SLP algorithm compared with the SQP algorithm of Fukushima, Luo and Pang [20], which, it should be recalled, gives results similar to those of the PIPA algorithm of Luo, Pang and Ralph [29]. Moreover, the SLP algorithm proved to be an excellent tool for solving generalized linear complementarity problems.

By combining the method introduced in this paper for computing a feasible solution of an MPEC problem (that is, a solution of a complementarity problem) with a cutting-plane procedure, one may obtain an algorithm for solving bilevel problems. However, a way must still be found to deal with degenerate solutions. The SLP algorithm could then be adapted to solve the general case of convex bilevel problems, though this would require analytical adjustments. By incorporating an efficient enumeration method on the equilibrium constraints, the SLP algorithm applied to the resolution of MPEC problems may be made globally convergent. These may constitute subjects of future research.

Acknowledgments

We are grateful to Professor Gilles Savard3 and Professor Patrice Marcotte4 for their constructive reviews and useful comments.

3 MAGI and GERAD, École Polytechnique de Montréal, C.P. 6079, succ. Centre-ville, Montréal (Québec) H3C 3A7, Canada ([email protected]). 4 DIRO and CRT, Université de Montréal ([email protected]).


References

1. G. Anandalingam and T. L. Friesz, Hierarchical Optimization: An Introduction, Annals of Operations Research 3 (1992) 1-11.
2. R. Andreani and J. M. Martinez, On the Solution of the Extended Linear Complementarity Problem, Linear Algebra and its Applications 281 (1998) 247-257.
3. J. F. Bard, Practical Bilevel Optimization: Algorithms and Applications, Kluwer Academic Publishers, Dordrecht, 1998.
4. J. Burke and S. Xu, A Non-interior Predictor-Corrector Path Following Algorithm for the Monotone Linear Complementarity Problem, Mathematical Programming, Ser. A 87 (2000) 113-130.
5. B. Chen and P. Harker, A Non-interior-point Continuation Method for Linear Complementarity Problems, SIAM Journal on Matrix Analysis and Applications 14 (1993) 1168-1190.
6. R. W. Cottle, J. S. Pang and V. Venkateswaran, Sufficient Matrices and the Linear Complementarity Problem, Linear Algebra and its Applications 114/115 (1989) 231-249.
7. S. Dempe, Annotated Bibliography on Bilevel Programming and Mathematical Programs with Equilibrium Constraints, Optimization 52 (2003) 333-359.
8. S. Engelke and C. Kanzow, Improved Smoothing-type Methods for the Solution of Linear Programs, Numerische Mathematik 90 (2002) 487-507.
9. J. B. Etoa Etoa, Contribution à la résolution des programmes mathématiques à deux niveaux et des programmes mathématiques avec contraintes d'équilibre, PhD Thesis, École Polytechnique de Montréal, 2005.
10. F. Facchinei and J. Soares, A New Merit Function for Nonlinear Complementarity Problems and a Related Algorithm, SIAM Journal on Optimization 7 (1997) 227-247.
11. F. Facchinei, H. Jiang and L. Qi, A Smoothing Method for Mathematical Programs with Equilibrium Constraints, Mathematical Programming 85 (1999) 107-133.
12. F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer, New York, 2003.
13. M. C. Ferris and J. S. Pang, Engineering and Economic Applications of Complementarity Problems, SIAM Review 19 (1997) 669-713.
14. M. C. Ferris and C. Kanzow, Complementarity and Related Problems: A Survey, working paper, Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Roma, 1998.
15. I. N. Figueiredo, J. J. Júdice and S. R. Silvério, A Class of Mathematical Programs with Equilibrium Constraints: A Smooth Algorithm and Applications to Contact Problems, Optimization and Engineering 6:2 (2004) 203-239.
16. A. Fischer, An NCP-function and its use for the solution of complementarity problems, in Recent Advances in Nonsmooth Optimization, D.-Z. Du, L. Qi and R. S. Womersley, eds., World Scientific, Singapore, 1994, pp. 88-104.
17. A. Fischer and H. Jiang, Merit Functions for Complementarity and Related Problems: A Survey, Computational Optimization and Applications 17 (2000) 159-182.
18. M. L. Flegel and C. Kanzow, Abadie-Type Constraint Qualification for Mathematical Programs with Equilibrium Constraints, Journal of Optimization Theory and Applications 124:3 (2005) 595-614.
19. R. Fletcher, S. Leyffer and Ph. L. Toint, On the Global Convergence of an SLP-Filter Algorithm, Numerical Analysis Report NA/183, University of Dundee, UK, August 1998.
20. M. Fukushima, Z.-Q. Luo and J.-S. Pang, A Globally Convergent Sequential Quadratic Programming Algorithm for Mathematical Programs with Linear Complementarity Constraints, Computational Optimization and Applications 10 (1998) 5-33.
21. P. T. Harker and J. S. Pang, Finite-dimensional Variational Inequality and Nonlinear Complementarity Problems: A Survey of Theory, Algorithms and Applications, Mathematical Programming 48 (1990) 161-220.
22. K. Hotta and A. Yoshise, Global Convergence of a Class of Non-interior-point Algorithms Using Chen-Harker-Kanzow Functions for Nonlinear Complementarity Problems, Mathematical Programming 86 (1999) 105-133.
23. C. Kanzow, Some Noninterior Continuation Methods for Linear Complementarity Problems, manuscript, Institute of Applied Mathematics, University of Hamburg, Hamburg, Germany, 1994.
24. C. Kanzow and M. Fukushima, Equivalence of the Generalized Complementarity Problem to Differentiable Unconstrained Minimization, Journal of Optimization Theory and Applications 15 (1995) 581-603.
25. C. Kanzow and H. Jiang, A Continuation Method for Strongly Monotone Variational Inequalities, Mathematical Programming 81 (1998) 103-124.
26. M. K. Kocvara and J. V. Outrata, On Optimization of Systems Governed by Implicit Complementarity Problems, Numerical Functional Analysis and Optimization 15 (1993) 869-887.
27. M. K. Kocvara and J. V. Outrata, On the Solution of Optimum Design Problems with Variational Inequalities, in Recent Advances in Nonsmooth Optimization, D.-Z. Du, L. Qi and R. S. Womersley, eds., World Scientific, Singapore, 1994, pp. 172-192.
28. M. K. Kocvara and J. V. Outrata, A Nonsmooth Approach to Optimization Problems with Equilibrium Constraints, in Proceedings of the International Conference on Complementarity Problems, M. C. Ferris and J. S. Pang, eds., Baltimore, Maryland, SIAM Publications, 1994, pp. 148-163.
29. Z.-Q. Luo, J.-S. Pang and D. Ralph, Mathematical Programs with Equilibrium Constraints, Cambridge University Press, Cambridge, UK, 1996.
30. P. Marcotte and D. L. Zhu, Exact and Inexact Penalty Methods for the Generalized Bilevel Programming Problem, Mathematical Programming A 74 (1995) 141-157.
31. P. Marcotte, G. Savard and D. L. Zhu, A Trust Region Algorithm for Nonlinear Bilevel Programming, Operations Research Letters 29 (2001) 171-179.
32. M. Minoux, Programmation mathématique : théorie et algorithmes, Tome 1, Dunod, Bordas et C.N.E.T.-E.N.S.T., Paris, 1983.
33. J. J. Moré, Global Methods for Nonlinear Complementarity Problems, Mathematics of Operations Research 21 (1995) 589-613.
34. J. V. Outrata, On Optimization Problems with Variational Inequality Constraints, SIAM Journal on Optimization 4:2 (1993) 340-357.
35. J. V. Outrata and J. Zowe, A Numerical Approach to Optimization Problems with Variational Inequality Constraints, Mathematical Programming 68 (1994) 105-130.
36. J. V. Outrata, M. Kocvara and J. Zowe, Nonsmooth Approach to Optimization Problems with Equilibrium Constraints, Encyclopedia of Optimization, Kluwer Academic Publishers, Dordrecht, 1998.
37. O. Pironneau and E. Polak, Rate of Convergence of a Class of Methods of Feasible Directions, SIAM Journal on Numerical Analysis 10:1 (1973) 161-173.
38. N. V. Thoai, Y. Yamamoto and A. Yoshise, Global Optimization Method for Solving Mathematical Programs with Linear Complementarity Constraints, Journal of Optimization Theory and Applications 124:2 (2004) 467-490.
39. D. M. Topkis and A. F. Veinott, On the Convergence of Some Feasible Direction Algorithms for Nonlinear Programming, SIAM Journal on Control (1967) 268-279.
40. P. Tseng, Growth Behavior of a Class of Merit Functions for Nonlinear Complementarity Problems, Journal of Optimization Theory and Applications 89 (1995) 17-37.
41. L. N. Vicente and P. H. Calamai, Bilevel and Multilevel Programming: A Bibliography Review, Journal of Global Optimization 5:3 (1993) 291-305.
42. J. Z. Zhang and G. S. Liu, A New Extreme Point Algorithm and its Application in PSQP Algorithms for Solving Mathematical Programs with Linear Complementarity Constraints, Journal of Global Optimization (2001) 345-361.