
Lopsided Convergence: an Extension and its Quantification

Johannes O. Royset, Operations Research Department, Naval Postgraduate School, Monterey, California, USA ([email protected])

Roger J-B Wets, Department of Mathematics, University of California, Davis, California, USA ([email protected])

Abstract. Much of the development of lopsided convergence for bifunctions defined on product spaces was in response to motivating applications. A wider class of applications requires an extension that would allow for arbitrary domains, not only product spaces. This leads to an extension of the definition and its implications that include the convergence of solutions and optimal values of a broad class of minsup problems. In the process we relax the definition of lopsided convergence even for the classical situation of product spaces. We now capture applications in optimization under stochastic ambiguity, Generalized Nash games, and many others. We also introduce the lop-distance between bifunctions, which leads to the first quantification of lopsided convergence. This quantification facilitates the study of convergence rates of methods for solving a series of problems including minsup problems, Generalized Nash games, and various equilibrium problems.

Keywords: lopsided convergence, lop-convergence, lop-distance, Attouch-Wets distance, epi-convergence, hypo-convergence, minsup problems, Generalized Nash games

AMS Classification: 49M99, 65K10, 90C15, 90C33, 90C47

Date: April 11, 2018

1 Introduction

The notion of lopsided convergence of bifunctions (= bivariate functions defined on a product space) was introduced in [3] for extended real-valued bifunctions. The focus on extended real-valued functions was motivated by the incentive to keep the development in concordance with the elegant “duality” results of Rockafellar [25, Chapters 33-37] and the subsequent convergence theory for saddle functions [4, 2, 19]. But this paradigm turned out to become unmanageable when confronted with a series of applications that required dealing with bifunctions that were not of the convex-concave type. Eventually, this led to restricting the convergence theory for bifunctions to real-valued bivariate functions (only) defined on specific subsets of the product space, cf. [21, 28] and especially [20]. Lopsided convergence is emerging as a central tool in the study of linear and nonlinear complementarity problems, fixed points, variational inequalities, inclusions, noncooperative games, mathematical programs with equilibrium constraints, optimality conditions, Walras and Nash equilibrium problems, optimization under stochastic ambiguity, and robust optimization; see the recent developments in [21, 28, 29]. Already, Aubin and Ekeland [7, Chapter 6] brought to the fore the ineluctable connections between some of these applications when dealing with existence issues.

Prior studies deal exclusively with bifunctions defined on a product space, but applications in Generalized Nash games, robust optimization, stochastic optimization with decision-dependent measures, and Generalized semi-infinite programming require extensions to bifunctions for which the second variable's domain depends on the first variable. For example, in a minsup problem,¹ this corresponds to the situation when the inner maximization has a feasible region that depends on the outer minimization variable; in a Generalized Nash game, the need arises when the set of feasible actions for any given agent depends on the actions of the other agents.

¹ We prefer “minsup problem” to “minimax problem” as the inner maximization may not be attained in much of our development.

We extend the definition of lopsided convergence to deal with these situations and establish an array of results addressing this wider setting. Specifically, we show that for this extended notion of lopsided convergence, optimal solutions and optimal values of approximating minsup problems tend to those of an original minsup problem. We also relax the definition of lopsided convergence for bifunctions defined on a product space and, therefore, broaden the area of application even in this classical situation. In the process, we recast, and in a couple of instances refine, the fundamental implications of epi-convergence in the present framework, i.e., for finite-valued functions defined on a subset of a metric space.

For the first time we quantify lopsided convergence by defining the lop-distance. The lop-distance between two bifunctions is given in terms of the Attouch-Wets distance [5] between the sup-projections of the bifunctions with respect to the second variable. Thus, we place the emphasis firmly on the outer minimization in a minsup problem at the expense of the inner maximization. This imbalance indeed motivated the terminology “lopsided.” In the context of minsup problems, Generalized Nash games, and many other situations, this perspective is reasonable as the inner problem is certainly secondary, as illustrated below; the solution of the outer minimization is primary. For example, this leads to estimates of the rate of convergence of that outer solution as demonstrated in [29]. We note that this point of view differs from that of epi/hypo-convergence [4] and the analysis of the convex/concave case [25, Chapters 33-37]. There the focus is on finding saddle-point pairs, which implies a certain balance between the inner and outer problems and indeed symmetry in the convex/concave case. Of course, our new viewpoint remains applicable in the convex/concave case and it yields somewhat sharper results not discussed here.

The article proceeds in §2 with a couple of motivating examples. In §3, we give the new definition, provide sufficiency conditions, and also discuss foundations related to epi-convergence. Consequences of lopsided convergence, with illustrations from Generalized Nash games, are established in §4 and the lop-distance is introduced in §5.

Throughout, we let (X, dX) and (Y, dY ) be two metric spaces and consider finite-valued bifunctions defined on nonempty subsets of X × Y .

2 Motivation

In contrast to this article's predecessors, e.g., [28], that deal with bifunctions of the form F : C × D ⊂ X × Y → IR with D a (fixed) subset of Y , we consider bifunctions with arbitrary domains. That is, the set D ⊂ Y of permissible values for the second variable might depend on x ∈ X, the first variable. This extension is essential to deal with two main application areas, as illustrated next, but also with a wide range of generalizations of the models in [21].

2.1 Optimization under Stochastic Ambiguity

Consider the minsup problem

    minx∈C supy∈D(x) F (x, y),

where D : C →→ Y is a set-valued mapping, C = domD = {x : D(x) ≠ ∅} ⊂ X, and Y is the space of probability distributions defined on IRm, with an appropriate metric. It identifies an optimization model with stochastic ambiguity where C is a collection of feasible decisions, D(x) an ambiguity set of probability distributions for every x ∈ C, and F a bifunction that depends both on the decision and the distribution. For example, F (x, P ) = IEP [φ(x, ξ)], where ξ is a random vector with distribution function P and the expectation is therefore taken with respect to a distribution function that is determined by the inner maximization problem. In applications, it is sometimes crucial to allow the ambiguity set to depend in a nontrivial manner on the decision x to capture situations where the decision maker affects the uncertainty, as modeled here via the set D(x); see [33, 29] as well as [30, 11, 13, 31] for related models. In [29], we leverage the results of the present paper to establish convergence of solutions of approximating optimization problems on IRn under stochastic ambiguity to those of an actual problem and also illustrate the use of the lop-distance defined in §5 for that context.
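To make the setup concrete, the following minimal sketch (not from the paper; all data and names are hypothetical) sets up such a minsup problem with a finite decision set C, a finite scenario space for ξ, and a decision-dependent ambiguity set D(x) consisting of two discrete distributions; the sup-projection is computed by enumeration and then minimized.

```python
# Minimal numerical sketch, assuming finite decision and scenario sets (illustrative only).
scenarios = [-1.0, 0.0, 1.0]          # support of the random vector xi
decisions = [0.0, 0.5, 1.0]           # the feasible set C

def phi(x, xi):
    # hypothetical cost of decision x under outcome xi
    return (x - xi) ** 2

def ambiguity_set(x):
    # decision-dependent ambiguity set D(x): two probability vectors over the scenarios;
    # larger x shifts some mass toward the scenario xi = 1
    return [(1/3, 1/3, 1/3), (1/3 - 0.1 * x, 1/3, 1/3 + 0.1 * x)]

def F(x, P):
    # F(x, P) = IE_P[phi(x, xi)] for a discrete distribution P
    return sum(p * phi(x, xi) for p, xi in zip(P, scenarios))

def sup_projection(x):
    # f(x) = sup over P in D(x) of F(x, P); the sup is attained since D(x) is finite
    return max(F(x, P) for P in ambiguity_set(x))

best = min(decisions, key=sup_projection)
print("minsup-point:", best, "minsup-value:", sup_projection(best))
```

In richer models D(x) is an infinite set of distributions and the inner supremum must be handled by duality or sampling; the enumeration above only illustrates the structure of the minsup problem.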


2.2 Generalized Nash Games

As we shall see, the study of Generalized Nash games naturally leads to the study of bifunctions that are defined on a (proper) subset of a product space. An equilibrium of a Generalized Nash game with a finite set A of agents is a solution x = (xa, a ∈ A) that satisfies

    xa ∈ argminxa∈Da(x−a) ca(xa, x−a), for all a ∈ A,

where x−a = (xa′ : a′ ∈ A\{a}), ca is the cost function for agent a, and Da(x−a) is the set of available strategies for agent a, which depends on the choices of strategies by the other agents.

The idea of using bifunctions to characterize equilibria of such games goes back at least to [24]; see also the review [14]. One approach leverages the Nikaido-Isoda bifunction, which is given by

    F (x, y) = ∑a∈A [ca(xa, x−a) − ca(ya, x−a)]   for x ∈ C, y ∈ D(x),

with

    C = {x : xa ∈ Da(x−a) for all a ∈ A}   and   D(x) = ∏a∈A Da(x−a).

Clearly, the (effective) domain of this bifunction might be rather involved.

To align with the notation elsewhere, we think of Da(x−a) as a subset of a metric space Xa of conceivable strategies for agent a and the underlying metric space for the first variable in the bifunction is X = ∏a∈A Xa, equipped with the product metric. Thus, C ⊂ X. Certainly, Da : X−a →→ Xa, where X−a = ∏a′∈A\{a} Xa′ , but the mapping D can be restricted to C as other points are irrelevant, i.e., D : C →→ X. We therefore have that the other underlying metric space, the one for the second variable in the bifunction, is Y = X in this case. Moreover, ca : Xa × X−a → IR. In our notation, highlighting the role of minsup-points, we then obtain the following characterization (see, e.g., [14]).

2.1 Proposition (characterization of equilibrium) In the notation of this subsection, x is an equilibrium if and only if it is a minsup-point of F with nonpositive minsup-value, i.e.,

    x ∈ argminx∈C supy∈D(x) F (x, y)   and   supy∈D(x) F (x, y) ≤ 0.

Proof. If x is a Nash equilibrium, then

    ca(xa, x−a) ≤ ca(ya, x−a) for all ya ∈ Da(x−a), a ∈ A.

Thus, F (x, y) ≤ 0 for all y ∈ D(x). For any x ∈ C, x ∈ D(x) and therefore supy∈D(x) F (x, y) ≥ 0. In particular,

    supy∈D(x) F (x, y) = F (x, x) = 0.

Consequently, x ∈ argminx∈C supy∈D(x) F (x, y). For the converse, let x be a minsup-point of F and supy∈D(x) F (x, y) ≤ 0. Then,

    0 ≥ supy∈D(x) F (x, y) = ∑a∈A [ca(xa, x−a) − infya∈Da(x−a) ca(ya, x−a)].

The upper bound of zero and the fact that xa ∈ Da(x−a) for all a ∈ A imply that each term in the sum must be zero and the conclusion follows.
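To illustrate Proposition 2.1, the following sketch (purely illustrative; the game data are hypothetical and not from the paper) builds the Nikaido-Isoda bifunction for a two-agent game with finite strategy sets and coupled constraints, computes the sup-projection by enumeration, and lists the minsup-points with nonpositive minsup-value, i.e., the equilibria.

```python
# Minimal sketch of the characterization in Proposition 2.1 (hypothetical game data).
import itertools

strategies = {1: [0, 1, 2], 2: [0, 1, 2]}

def cost(a, xa, x_other):
    # illustrative cost c_a(x_a, x_{-a}) coupling the two agents
    return (xa - x_other) ** 2 + 0.1 * xa

def D(a, x_other):
    # coupled constraint: agent a may not exceed the other agent's choice by more than 1
    return [s for s in strategies[a] if s <= x_other + 1]

def nikaido_isoda(x, y):
    # F(x, y) = sum over agents of c_a(x_a, x_{-a}) - c_a(y_a, x_{-a})
    return (cost(1, x[0], x[1]) - cost(1, y[0], x[1])
            + cost(2, x[1], x[0]) - cost(2, y[1], x[0]))

def sup_F(x):
    # sup over y in D(x) = D_1(x_{-1}) x D_2(x_{-2}); attained since the sets are finite
    return max(nikaido_isoda(x, (y1, y2)) for y1 in D(1, x[1]) for y2 in D(2, x[0]))

C = [x for x in itertools.product(strategies[1], strategies[2])
     if x[0] in D(1, x[1]) and x[1] in D(2, x[0])]
minsup_value = min(sup_F(x) for x in C)
equilibria = [x for x in C if sup_F(x) <= 1e-9 and sup_F(x) <= minsup_value + 1e-9]
print("minsup-value:", minsup_value, "equilibria:", equilibria)
```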

In §4.3, we show how stability and approximation of the solutions of Generalized Nash games can be examined through the lens of lopsided convergence. The results are widely applicable as the strategy spaces of the agents are only required to be metric spaces and an agent's constraint set may neither be convex nor compact and could very well depend on the other agents' choices of strategies – a centerpiece of the new definition of lopsided convergence. Thus, we extend results in [18, 21] that both consider finite-dimensional strategies and constraint sets independent of other agents' choices of strategies.

3 Lopsided Convergence

As already mentioned in the Introduction, to encompass the family of applications sketched out in §2 and others, we need to extend the definition of lopsided convergence, and the resulting theory, to a larger class of bifunctions than those considered in earlier work. The domain, domF ≠ ∅, of a finite-valued bifunction F will no longer be restricted to a product subset of X × Y but could be any subset. The family of all such bifunctions is denoted

    bfcns(X, Y ) := {F : domF → IR : ∅ ≠ domF ⊂ X × Y }.

It is on this family that we introduce the new definition of lopsided convergence.

3.1 Definition

The (first) x-variable of a bifunction takes a primary role in our development, leading us to the following description of the domain of a bifunction. We associate with a bifunction F the set

    C := {x ∈ X : ∃ y ∈ Y such that (x, y) ∈ domF}

and the set-valued mapping D : C →→ Y such that

    D(x) := {y ∈ Y : (x, y) ∈ domF} for x ∈ C.

Thus, domF = {(x, y) ∈ X × Y : x ∈ C, y ∈ D(x)}, with domD = {x ∈ X : D(x) ≠ ∅} = C.

When domF is a product set, it agrees with having D a constant mapping; the output of that mapping is then also denoted by D (instead of D(x)). Figure 1 illustrates the case with a product set (left portion) and the general case (right portion). Throughout, domF is described in terms of such C and D.

[Figure 1: Domains of bifunctions: product set (left) and general (right).]

In applications, a bifunction of interest might be defined on a “large” subset of X × Y , possibly everywhere, but the context requires restrictions to some “smaller” subset, for example dictated by constraints imposed on the variables. If these constraints are that x ∈ C ⊂ X and y ∈ D(x) for some D : C →→ Y , then F in our notation would become the original bifunction restricted to the set {(x, y) ∈ X × Y : x ∈ C, y ∈ D(x)}, which becomes domF . In other applications, a bifunction F might be the only problem data given. These differences are immaterial to the following development as both are captured by considering bfcns(X, Y ).

With IN := {1, 2, 3, ...}, collections of bifunctions are often denoted by {F ν , ν ∈ IN} ⊂ bfcns(X, Y ). Analogously to the notation above, domF ν is described by a set Cν ⊂ X and a set-valued mapping Dν : Cν →→ Y such that domF ν = {(x, y) ∈ X × Y : x ∈ Cν , y ∈ Dν(x)}. If not specified otherwise, the index ν runs over IN so that xν → x means that the whole sequence {xν , ν ∈ IN} converges to x. Let N#∞ be the collection of all subsets of IN determined by subsequences, i.e., N ∈ N#∞ is an infinite collection of strictly increasing natural numbers. Thus, {xν , ν ∈ N} is a subsequence of {xν , ν ∈ IN}; its convergence to x is denoted by xν →N x.

The extended definition of lopsided convergence takes the following form.

3.1 Definition (lopsided convergence) Let {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) with domains described by (C,D) and (Cν , Dν), respectively. Then, F ν converges lopsided, or lop-converges, to F , written F ν −→lop F , when

(a) ∀N ∈ N#∞, xν ∈ Cν →N x ∈ C, and y ∈ D(x), ∃ yν ∈ Dν(xν) →N y such that liminfν∈N F ν(xν , yν) ≥ F (x, y); and ∀N ∈ N#∞ and xν ∈ Cν →N x ∉ C, ∃ yν ∈ Dν(xν) such that F ν(xν , yν) →N ∞;

(b) ∀x ∈ C, ∃ xν ∈ Cν → x such that ∀N ∈ N#∞ and yν ∈ Dν(xν) →N y ∈ Y , limsupν∈N F ν(xν , yν) ≤ F (x, y) if y ∈ D(x) and F ν(xν , yν) →N −∞ otherwise.

Lop-convergence does not have a direct geometric interpretation. However, as discussed in §4, it is intimately tied to epi- and hypo-convergence, which are easily understood in terms of the convergence of epigraphs and hypographs; see §3.4. We can therefore understand, in part, lop-convergence through these geometric interpretations. A preview of the conclusions reached in §4 builds intuition at this stage: If the bifunctions F ν and F do not depend on y, then lop-convergence “collapses” to epi-convergence. Under mild assumptions, F ν −→lop F implies that supy∈Dν(·) F ν(·, y) epi-converges to supy∈D(·) F (·, y). We also have that at every x ∈ C, F ν(xν , ·) hypo-converges to F (x, ·) for some xν → x.

We note that even in the case when domF and domF ν are product sets for all ν, our definition represents a relaxation of the requirements in prior definitions as {yν , ν ∈ IN} in condition (a) of Definition 3.1 for the case x ∉ C is not required to converge to y.

Contrast with earlier definition for product sets. Suppose that {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) have domF = C × D and domF ν = Cν × Dν for sets D, Dν ⊂ Y . In [20, 28], lop-convergence is defined to take place when² (a') ∀N ∈ N#∞, xν ∈ Cν →N x ∈ X, and y ∈ D, ∃ yν ∈ Dν →N y such that liminfν∈N F ν(xν , yν) ≥ F (x, y) if x ∈ C and F ν(xν , yν) →N ∞ otherwise; and (b') ∀x ∈ C, ∃ xν ∈ Cν → x such that ∀N ∈ N#∞ and yν ∈ Dν →N y ∈ Y , limsupν∈N F ν(xν , yν) ≤ F (x, y) if y ∈ D and F ν(xν , yν) →N −∞ otherwise. The condition (b') is exactly condition (b) of Definition 3.1 for the case of product sets. However, condition (a') is stronger than condition (a) of Definition 3.1 as illustrated by the following trivial example. Let X = Y = IR, Cν = C = (0, 1], Dν = D = [0, 1], and F ν(x, y) = F (x, y) = 1/(x + y) for (x, y) in their domains. It is clear that (a') fails for y = 1, x = 0, and xν = 1/ν as there is no yν → y such that F ν(xν , yν) → ∞. However, condition (a) of Definition 3.1 holds as one can take yν = 1/ν in the case x ∉ C; then F ν(xν , yν) = ν/2 → ∞. If x ∈ C, then one can take yν = y and obtain that F ν(xν , yν) = F (xν , y) → F (x, y) by continuity of F . For condition (b) of Definition 3.1, one can take xν = x and, due to the closedness of D, only be concerned with y ∈ D. Continuity of F then allows us to conclude that F ν −→lop F . In this case, with F ν = F , convergence is indeed “natural” and Definition 3.1 addresses this situation.

² The need to check subsequences N ∈ N#∞ is understood, but not explicitly stated in these references.
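A quick numerical illustration of the difference (a sketch using the example's data; not part of the formal development): along xν = 1/ν → 0 ∉ C, the values F ν(xν , yν) stay bounded for any yν → 1, so (a') cannot hold, whereas the single sequence yν = 1/ν already produces the divergence to ∞ required by condition (a) of Definition 3.1.

```python
# Illustrative computation for the example above: F(x, y) = 1/(x + y) on (0,1] x [0,1].
def F(x, y):
    return 1.0 / (x + y)

for nu in (1, 10, 100, 1000):
    x_nu = 1.0 / nu                 # x^nu -> 0, and 0 is not in C = (0, 1]
    bounded = F(x_nu, 1.0)          # along y^nu = 1 the values stay near 1: (a') fails
    diverging = F(x_nu, 1.0 / nu)   # y^nu = 1/nu gives nu/2 -> infinity: (a) holds
    print(f"nu = {nu:4d}:  F(x^nu, 1) = {bounded:.4f}   F(x^nu, 1/nu) = {diverging:.1f}")
```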

3.2 About Sufficiency

One can come up with a wide collection of sufficient conditions for lop-convergence in terms of the way the “components” of a sequence of bifunctions F ν converge to those of the limiting bifunction F ; note that it will be necessary to broaden convergence notions for functions and mappings to take into account the fact that the domains of these bifunctions are generally not identical. The following results are only meant to illustrate the possibilities and might not be as taut as possible and certainly not necessary. The more interesting ones come from specific applications such as those laid out in [21, 12] involving bifunctions whose domains are product sets and the more general family, when the domains are not restricted to product sets, such as the examples in §4.3 and those described in [29].

In the following, convergence of sequences in X × Y is always in the sense of the product topology, i.e., (xν , yν) ∈ X × Y → (x, y) ∈ X × Y if max{dX(xν , x), dY (yν , y)} → 0. Convergence of sets is always in the sense of Painlevé-Kuratowski. Specifically, in a metric space, the outer limit of a sequence of sets {Aν , ν ∈ IN}, denoted by OutLimAν , is the collection of points x to which a subsequence of {xν ∈ Aν , ν ∈ IN} converges. The inner limit, denoted by InnLimAν , is the collection of points to which a sequence of {xν ∈ Aν , ν ∈ IN} converges. If both limits exist and are identical to A, we say that Aν (set-)converges to A, which is denoted by Aν → A; see [9, 26].

3.2 Theorem (sufficiency when C = Cν) For bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) with domains described by (C,D) and (Cν , Dν), respectively, F ν −→lop F when

(a) C = Cν , ν ∈ IN , are closed;

(b) the mappings Dν continuously converge to D, relative to C, i.e., ∀xν ∈ C → x ∈ C, Dν(xν) → D(x); and

(c) the bifunctions F ν continuously converge to F , relative to their domains, i.e., ∀(xν , yν) ∈ domF ν → (x, y) ∈ domF , F ν(xν , yν) → F (x, y).

Proof. Since C is closed, any convergent sequence xν ∈ C → x has x ∈ C. Moreover, continuous convergence of the mappings Dν to D, relative to C, implies Dν(xν) → D(x) which, in turn, implies that for any y ∈ D(x) one can find yν ∈ Dν(xν) converging to y. From (c), the continuous convergence of the bifunctions F ν to F implies F ν(xν , yν) → F (x, y), which immediately yields condition (a) of Definition 3.1.

To verify condition (b) of Definition 3.1, given any x ∈ C, by choosing the sequence {xν = x, ν ∈ IN}, Dν(x) → D(x) follows from continuous convergence of the mappings. Thus, whenever yν ∈ Dν(x) → y, y ∈ D(x). In turn, this means that we only have to check if limsupF ν(x, yν) ≤ F (x, y), which, of course, is satisfied since F ν(x, yν) → F (x, y) in view of assumption (c).

Next, we deal with the situation when the domains of the bifunctions are product sets. In fact, the next statement can be viewed as a refinement of the earlier theorem. We recall that a function f : C → IR, with C ⊂ X and X any metric space, is lower semicontinuous (lsc) when, for all xν ∈ C → x ∈ X, liminf f(xν) ≥ f(x) if x ∈ C and f(xν) → ∞ otherwise.

3.3 Proposition (sufficiency under product sets) Let {F : C × D → IR, F ν : C × Dν → IR, ν ∈ IN} ⊂ bfcns(X, Y ) with C closed and Dν ⊂ Y → D ⊂ Y . If, in terms of some lsc bifunction F̂ : X × Y → IR,

    F = F̂ on C × D and F ν = F̂ on C × Dν ,

then F ν −→lop F , provided that, for all x ∈ C, F̂ (x, ·) is usc.


Proof. First, consider condition (a) of Definition 3.1, which now simplifies to finding, for every y ∈ D and xν ∈ C → x ∈ C, a sequence yν ∈ Dν → y such that liminf F̂ (xν , yν) ≥ F̂ (x, y). This condition follows from the set-convergence of Dν to D and the lower semicontinuity of F̂ . Second, for condition (b) of Definition 3.1, we select xν = x ∈ C for all ν. Since Dν → D, any sequence yν ∈ Dν → y has y ∈ D. Thus, the condition simplifies to limsup F̂ (x, yν) ≤ F̂ (x, y), which holds in view of the last assumption of the proposition.

Finally, we record a result for the case when there are approximations in the set controlling the (first) x-variable, which may occur, for example, in sensitivity analysis, semi-infinite optimization, stochastic programming, and problems with an infinite-dimensional space X, all of which may trigger approximations.

3.4 Proposition (sufficiency when C ≠ Cν) For bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) with domains described by (C,D) and (Cν , Dν), respectively, F ν −→lop F when

(a) Cν → C;

(b) for some continuous set-valued mapping D̂ : X →→ Y ,

    D = D̂ on C and Dν = D̂ on Cν ;

(c) for some continuous bifunction F̂ : X × Y → IR,

    F = F̂ on domF and F ν = F̂ on domF ν .

Proof. Since Cν → C, every xν ∈ Cν → x must have x ∈ C. Thus, condition (a) of Definition 3.1 simplifies to finding, for every xν ∈ Cν → x ∈ C and y ∈ D(x), a sequence yν ∈ D̂(xν) → y with liminf F̂ (xν , yν) ≥ F̂ (x, y). Since D̂ is continuous, such a sequence exists and the inequality therefore follows from the continuity of F̂ . We next turn to condition (b) of Definition 3.1. Since D̂ is continuous, yν ∈ D̂(xν) → y implies that y ∈ D̂(x) whenever xν → x. Thus, the condition simplifies to finding, for every x ∈ C, a sequence xν ∈ Cν → x such that limsup F̂ (xν , yν) ≤ F̂ (x, y) for all yν ∈ D̂(xν) → y ∈ D̂(x). Since Cν → C, there is certainly a sequence xν ∈ Cν → x for all x ∈ C. The inequality then holds in view of the continuity of F̂ .

It is easy to find generalizations of the preceding results; for example, rather than requiring continuous convergence of the F ν in Theorem 3.2, one could be satisfied with some “semicontinuous” convergence complemented with a pointwise upper semicontinuity condition. At this point, we shall not get involved in all the possibilities as eventually one is bound to be mostly interested in conditions that apply in specific applications. However, we caution that certain “natural” conditions are not sufficient, as exemplified next.

Failure of lop-convergence under graphical convergence. Consider the following situation where F ν = F for all ν ∈ IN , with domains described by C = Cν = IR and Dν(x) = D(x) = [−1, 1] if x ≤ 0, and {0} otherwise. Certainly the mappings Dν graphically converge to D since they are identical. However, when considering condition (a) of Definition 3.1 with xν > 0 → x = 0 and y = 1/2 ∈ D(x), there are no yν ∈ Dν(xν) → y and lop-convergence fails. We note that in this case pointwise convergence Dν(x) → D(x) holds for all x ∈ IR and, consequently, the mappings are equi-osc [26, Theorem 5.40]. We next give a more involved example where pointwise convergence again holds, but now for problems with different solutions.

Failure of lop-convergence under pointwise set-convergence. Suppose that {F, F ν , ν ∈ IN} ⊂ bfcns(IR, IR) with domains described by C = Cν = [0, 1], D(x) = Dν(x) = {0} for x ∈ [0, 1), Dν(1) = [0, 1 + 1/ν], and D(1) = [0, 1]. Moreover, let F (x, y) = F ν(x, y) = 0 if x ∈ [0, 1), and F (1, y) = F ν(1, y) = −2 + y if y ≤ 1 and F ν(1, y) = 1 if y > 1. Clearly, for every x ∈ [0, 1], Dν(x) → D(x). However, lop-convergence of F ν to F fails as for x = 1, xν = 1 − 1/ν, and y = 1, there exists no sequence {yν , ν ∈ IN}, with yν ∈ Dν(xν) = {0}, that converges to y as required by condition (a) of Definition 3.1. Here, supy∈Dν(x) F ν(x, y) = 0 if x ∈ [0, 1) and supy∈Dν(x) F ν(x, y) = 1 if x = 1, while supy∈D(x) F (x, y) = 0 if x ∈ [0, 1) and supy∈D(1) F (1, y) = −1. Thus, the optimal value of minx∈Cν supy∈Dν(x) F ν(x, y), which is 0, does not converge to the optimal value of minx∈C supy∈D(x) F (x, y), which is −1. Since a main purpose of a notion of variational convergence of bifunctions is to ensure convergence of such optimal values, it is clear that pointwise set-convergence is not strong enough.
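The gap between the optimal values can be reproduced numerically; the sketch below (illustrative only, with the sup and inf evaluated on finite grids) returns 0 for the approximating problems and −1 for the actual one.

```python
# Illustrative computation for the example above (grids stand in for the intervals).
import numpy as np

def D(x, nu=None):
    # D(x) = D^nu(x) = {0} for x in [0,1); D(1) = [0,1] and D^nu(1) = [0, 1 + 1/nu]
    if x < 1.0:
        return np.array([0.0])
    top = 1.0 if nu is None else 1.0 + 1.0 / nu
    return np.linspace(0.0, top, 201)

def F(x, y):
    # F = F^nu: 0 for x in [0,1); at x = 1, equal to -2 + y for y <= 1 and to 1 for y > 1
    if x < 1.0:
        return 0.0
    return -2.0 + y if y <= 1.0 else 1.0

grid = np.linspace(0.0, 1.0, 101)                             # the set C = C^nu = [0,1]
actual = min(max(F(x, y) for y in D(x)) for x in grid)
approx = min(max(F(x, y) for y in D(x, nu=10)) for x in grid)
print("actual minsup-value:", actual)        # -1, attained at x = 1
print("approximate minsup-value:", approx)   # 0, so the optimal values do not converge
```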

3.3 Tightness

A slight strengthening of lop-convergence that amounts to a relaxed compactness assumption becomes beneficial in §4 when deriving consequences.

3.5 Definition (ancillary-tight lop-convergence) The lop-convergence of {F ν , ν ∈ IN} ⊂ bfcns(X, Y ) to F ∈ bfcns(X, Y ) is ancillary-tight when for every ε > 0 and sequence xν → x selected in condition (b) of Definition 3.1, there exists a compact set Bε ⊂ Y and an integer νε such that

    supy∈Dν(xν)∩Bε F ν(xν , y) ≥ supy∈Dν(xν) F ν(xν , y) − ε for all ν ≥ νε,

where Dν describes domF ν .

As usual, we interpret the supremum over an empty subset of IR as −∞. The added requirement for ancillary-tightness is satisfied if all Dν(xν) are contained in a compact set, but many other possibilities exist. For example, in [29] ancillary-tightness is connected to “tightness” in the sense of probability theory when Y is a space of distribution functions with a metric corresponding to convergence in distribution. Applications in optimization under stochastic ambiguity can be of this form; see §2.1. In this case, if {Dν(xν), ν ∈ IN} is tight in the sense of probability theory for every sequence xν → x selected in condition (b) of Definition 3.1, then lop-convergence implies ancillary-tight lop-convergence, i.e., ancillary-tightness is “automatic.”


If ancillary-tightness is combined with a similar condition for the outer minimization, we obtain a further strengthening of the notion.

3.6 Definition (tight lop-convergence) The ancillary-tight lop-convergence of bifunctions {F ν , ν ∈ IN} ⊂ bfcns(X, Y ) to F ∈ bfcns(X, Y ) is tight when for any ε > 0 one can find a compact set Aε ⊂ X and an integer νε such that

    infx∈Cν∩Aε supy∈Dν(x) F ν(x, y) ≤ infx∈Cν supy∈Dν(x) F ν(x, y) + ε for all ν ≥ νε,

where (Cν , Dν) describes domF ν .

The infimum of an empty subset of IR is interpreted as ∞. This further strengthening of the requirements would be satisfied if all Cν are contained in a compact set, but this is certainly not a necessity.

3.4 Epi- and Hypo-Convergence: A Summary

Before we develop consequences of lop-convergence, we give some background facts about epi- and hypo-convergence for (univariate) functions; see [1, 8, 26] for comprehensive treatments. We present results for real-valued functions defined on (nonempty) subsets of (X, dX); in many ways, this is just a variant of the more traditional framework that considers extended real-valued functions, cf. [8, 26], but slight sharpening of some results becomes possible. Our focus will, thus, be on

    fcns(X) = {f : C → IR, for some ∅ ≠ C ⊂ X}.

In this section, for functions f, f ν ∈ fcns(X), we denote by C and Cν their domains, respectively. The epigraph of f , epi f ⊂ X × IR, consists of all points that lie on or above the graph of f ; f is lsc if epi f is a closed subset of X × IR (with respect to the product topology generated by dX and the usual metric on IR) and, provided X is a linear space³, it is convex if its epigraph is convex. The hypograph of f , hypo f ⊂ X × IR, consists of all points that lie on or below the graph of f ; f is upper semicontinuous (usc) if hypo f is closed and, provided X is a linear space, it is concave if its hypograph is convex.

³ Statements about convexity/concavity are the only ones that require a linear space in this paper.

A sequence of functions {f ν , ν ∈ IN} ⊂ fcns(X) epi-converges to a function f ∈ fcns(X), written f ν →e f , when the epigraphs epi f ν set-converge to epi f ; similarly, they hypo-converge, written f ν →h f , if the hypographs hypo f ν set-converge to hypo f . Equivalently, epi-convergence can also be defined as follows:

3.7 Definition (epi- and hypo-convergence) For {f, f ν , ν ∈ IN} ⊂ fcns(X) with domains C and Cν , respectively, we have that f ν →e f if and only if

(a) ∀N ∈ N#∞ and xν ∈ Cν →N x, liminfν∈N f ν(xν) ≥ f(x) if x ∈ C and f ν(xν) →N ∞ otherwise;

(b) ∀x ∈ C, ∃ xν ∈ Cν → x such that limsup f ν(xν) ≤ f(x).

The functions f ν are said to epi-converge tightly to f when f ν →e f and, for all ε > 0, one can find a compact set Bε ⊂ X and an index νε such that

    ∀ ν ≥ νε : infx∈Cν∩Bε f ν(x) ≤ infx∈Cν f ν(x) + ε.

Moreover, f ν →h f if and only if −f ν →e −f , and they hypo-converge tightly if the functions −f ν epi-converge tightly to −f .

As follows immediately from the properties of set-limits, an epi-limit is always lsc and, provided that X is linear, it is convex whenever the functions f ν are convex. Moreover, a hypo-limit is always usc and, provided that X is linear, it is concave whenever the functions f ν are concave. The topology induced by epi-convergence is metrizable, a property we leverage in §5.

For f ∈ fcns(X) with domain C, optimal values are denoted by

inf f := inf{f(x) : x ∈ C} and sup f := sup{f(x) : x ∈ C},

and, with ε ≥ 0, (near-)optimal solutions by

ε- argmin f := {x ∈ C : f(x) ≤ inf f + ε} and ε- argmax f := {x ∈ C : f(x) ≥ sup f − ε}.

Since C is nonempty because of f ∈ fcns(X), inf f < ∞. Moreover, inf f = −∞ implies that argmin f = ∅. Convergence of optimal solutions and optimal values is summarized in the next theorem. The result and proof are mostly the same as those of [26, Theorem 7.31] and [20, Theorems 2.5 and 2.8], which consider X = IRn, but are stated here for completeness with some clarifications and improvements, especially regarding the role of finiteness of inf f and the convergence of near-optimal solutions. We refer to [4] for early results of this kind.

3.8 Theorem (epi- and hypo-convergence: basic properties) Consider {f, f ν , ν ∈ IN} ⊂ fcns(X). If f ν →e f , then the following hold:

(a) limsup(inf f ν) ≤ inf f and ∀{εν ↓ 0, ν ∈ IN}, OutLim(εν-argmin f ν) ⊂ argmin f .

(b) If {xν ∈ argmin f ν , ν ∈ N} converges for some N ∈ N#∞, then limν∈N(inf f ν) = inf f .

(c) inf f ν → inf f > −∞ ⇐⇒ f ν →e f tightly.

(d) inf f ν → inf f and ε > 0 =⇒ InnLim(ε-argmin f ν) ⊃ argmin f .

(e) inf f ν → inf f and X is separable⁴ =⇒ ∃{εν ↓ 0, ν ∈ IN} such that εν-argmin f ν → argmin f .

(f) ∃{εν ↓ 0, ν ∈ IN} such that εν-argmin f ν → argmin f ≠ ∅ =⇒ inf f ν → inf f > −∞.

If f ν →h f , then liminf(sup f ν) ≥ sup f and (a)-(f) hold with min/inf replaced by max/sup, > −∞ by < ∞, and tight epi-convergence by tight hypo-convergence.

⁴ We deduce from a counterexample in [10] that the separability assumption cannot be relaxed.

Proof. Let C and Cν be the domains of f and f ν , respectively. For part (a), we first suppose that inf f is finite and let ε > 0. There exists x ∈ C such that f(x) ≤ inf f + ε and also, by condition (b) of Definition 3.7, xν ∈ Cν → x such that limsup f ν(xν) ≤ f(x). Thus, limsup(inf f ν) ≤ limsup f ν(xν) ≤ f(x) ≤ inf f + ε. Second, suppose that inf f = −∞ and let δ > 0. Then, there exists x ∈ C such that f(x) ≤ −δ and also, by condition (b) of Definition 3.7, xν ∈ Cν → x such that limsup f ν(xν) ≤ f(x). Thus, limsup(inf f ν) ≤ limsup f ν(xν) ≤ f(x) ≤ −δ. Since ε and δ are arbitrary, the first result of part (a) is established. For the second result, suppose that x ∈ OutLim(εν-argmin f ν). Then, there exist N ∈ N#∞ and xν ∈ εν-argmin f ν →N x. Thus,

    limsupν∈N f ν(xν) ≤ limsupν∈N(inf f ν + εν) ≤ inf f < ∞.

In view of condition (a) of Definition 3.7, this implies that x ∈ C and also

    f(x) ≤ liminfν∈N f ν(xν) ≤ limsupν∈N f ν(xν) ≤ inf f.

Hence, x ∈ argmin f and the proof of part (a) is complete.

For part (b), let x be the limit of {xν ∈ argmin f ν , ν ∈ N}. In view of part (a), x ∈ argmin f and also limsup(inf f ν) ≤ inf f . Condition (a) of Definition 3.7 implies that liminfν∈N(inf f ν) = liminfν∈N f ν(xν) ≥ f(x) = inf f and the conclusion holds.

Part (c), necessity. Suppose that inf f ν → inf f > −∞ and let ε > 0. Then, there exist ν1 ∈ IN such that inf f ≤ inf f ν + ε/3 for all ν ≥ ν1 and also x ∈ C such that f(x) ≤ inf f + ε/3. In view of condition (b) of Definition 3.7, there exist xν ∈ Cν → x and ν2 ≥ ν1 such that f ν(xν) ≤ f(x) + ε/3 for all ν ≥ ν2. Let B ⊂ X be a compact set containing {xν , ν ∈ IN}. Thus, for ν ≥ ν2,

    infx∈Cν∩B f ν(x) ≤ f ν(xν) ≤ f(x) + ε/3 ≤ inf f + 2ε/3 ≤ inf f ν + ε.

Part (c), sufficiency. For the sake of a contradiction let inf f = −∞. Then, (a) implies that inf f ν → −∞. Since the epi-convergence is tight, there exists a compact set B ⊂ X such that infx∈Cν∩B f ν(x) → −∞ and therefore also a sequence {xν ∈ Cν ∩ B, ν ∈ IN} such that f ν(xν) → −∞. The compactness of B implies that for some N ∈ N#∞ and x ∈ B, limν∈N xν = x. In view of condition (a) of Definition 3.7, liminfν∈N f ν(xν) ≥ f(x) ∈ IR if x ∈ C and f ν(xν) →N ∞ otherwise. However, both cases contradict f ν(xν) → −∞ and thus inf f > −∞. We next show that liminf(inf f ν) ≥ inf f , which, together with (a), completes the proof of (c). We start by showing that for any compact set B ⊂ X, {infx∈Cν∩B f ν(x), ν ∈ IN}, except possibly for a finite number of indexes, is bounded away from −∞. For the sake of a contradiction, suppose that for N ∈ N#∞ we have infx∈Cν∩B f ν(x) < inf f − 1 for all ν ∈ N . Then, there exists {xν ∈ Cν ∩ B, ν ∈ N} such that f ν(xν) < inf f − 1 for all ν ∈ N . Since B is compact, there is a cluster point x ∈ B of {xν , ν ∈ N}. By condition (a) of Definition 3.7 and the boundedness of {f ν(xν), ν ∈ N}, x ∈ C. Then, the same condition gives that liminfν∈N f ν(xν) ≥ f(x) ≥ inf f , which is a contradiction. Hence, {infx∈Cν∩B f ν(x), ν ∈ IN}, except possibly for a finite number of indexes, is bounded away from −∞. Next, for ε > 0, let the compact set Bε ⊂ X and νε be such that infx∈Cν∩Bε f ν(x) ≤ inf f ν + ε for all ν ≥ νε, which holds by the tightness assumption. Since {infx∈Cν∩Bε f ν(x), ν ∈ IN} is eventually bounded away from −∞, there exist ν̄ε ≥ νε and xν ∈ Cν ∩ Bε such that f ν(xν) ≤ infx∈Cν∩Bε f ν(x) + ε for all ν ≥ ν̄ε. Thus, for ν ≥ ν̄ε,

    f ν(xν) ≤ infx∈Cν∩Bε f ν(x) + ε ≤ inf f ν + 2ε.

In view of (a), we conclude that {f ν(xν), ν ∈ IN} is bounded from above. The compactness of Bε implies that there exist N ∈ N#∞ and x ∈ Bε such that limν∈N xν = x. In view of condition (a) of Definition 3.7, x ∈ C because {f ν(xν), ν ∈ N} is bounded from above. The same condition then implies that

    liminfν∈N(inf f ν) + 2ε ≥ liminfν∈N(infx∈Cν∩Bε f ν(x)) + ε ≥ liminfν∈N f ν(xν) ≥ f(x) ≥ inf f.

Since this argument holds not only for {f ν , ν ∈ IN} but also for all subsequences, liminf(inf f ν) + 2ε ≥ inf f . We reach the conclusion of part (c) after recognizing that ε is arbitrary.

For part (d), let x ∈ argmin f . Condition (b) of Definition 3.7 implies that there exist xν ∈ Cν → x and ν1 ∈ IN such that f ν(xν) ≤ f(x) + ε/2 for all ν ≥ ν1. Since inf f is finite, there is also ν2 ≥ ν1 such that inf f ≤ inf f ν + ε/2 for all ν ≥ ν2. Thus,

    f ν(xν) ≤ f(x) + ε/2 = inf f + ε/2 ≤ inf f ν + ε

for all ν ≥ ν2 and we conclude that x ∈ InnLim(ε-argmin f ν).

Part (e). From (a), OutLim(εν-argmin f ν) ⊂ argmin f for any εν ↓ 0. Thus it suffices to show that InnLim(εν-argmin f ν) ⊃ argmin f for some εν ↓ 0. If argmin f = ∅, then the inclusion is automatic. Thus, we can assume that argmin f ≠ ∅ and inf f is finite. Theorem 3.1 in [10] states the following result: For any collection {h, hν , ν ∈ IN} of extended real-valued lsc functions on a separable metric space, if hν →e h and α ∈ IR, then there exists αν → α such that {x : hν(x) ≤ αν} → {x : h(x) ≤ α}. To apply this theorem here, let lsc f ν be the lsc-regularization of f ν on X, i.e., the highest lsc function on X not exceeding f ν extended to the whole of X by assigning it the value ∞ outside its domain. Since lsc f ν →e f , which already is lsc and can be extended in the same manner, it follows that there exists αν → inf f such that

    {x ∈ X : (lsc f ν)(x) ≤ αν} → {x ∈ X : f(x) ≤ inf f} = argmin f.

Set {εν = 1/ν + max[0, αν − inf f ν ], ν ∈ IN} and x ∈ argmin f . In view of the above set-convergence, there exists a sequence {xν , ν ∈ IN}, with (lsc f ν)(xν) ≤ αν , converging to x. The definition of lsc-regularization implies that there exists {yν ∈ Cν , ν ∈ IN} such that dX(xν , yν) ≤ 1/ν and f ν(yν) ≤ (lsc f ν)(xν) + 1/ν. Thus,

    f ν(yν) ≤ (lsc f ν)(xν) + 1/ν ≤ αν + 1/ν ≤ εν + inf f ν

and therefore yν ∈ εν-argmin f ν . Consequently, x ∈ InnLim(εν-argmin f ν), which implies that InnLim(εν-argmin f ν) ⊃ argmin f . Since inf f ν → inf f , εν ↓ 0 and the conclusion holds.

For part (f), let x ∈ argmin f . The assumption permits the selection of ν̄ ∈ IN and a sequence {xν ∈ εν-argmin f ν , ν ≥ ν̄} → x. Thus, by condition (a) of Definition 3.7,

    inf f = f(x) ≤ liminf f ν(xν) ≤ liminf(inf f ν + εν) = liminf(inf f ν).

In view of part (a), the conclusion follows.

4 Consequences of Lopsided Convergence

The main consequence of lop-convergence of bifunctions F ν to F is the convergence of solutions of the

    approximating minsup problems   minx∈Cν supy∈Dν(x) F ν(x, y)

to those of an

    actual minsup problem   minx∈C supy∈D(x) F (x, y),

where (C,D) and (Cν , Dν) describe domF and domF ν , respectively. We give detailed results below, but start with fundamental properties associated with lopsided convergence.

4.1 Basic Properties

For “degenerate” bifunctions that only depend on their first argument, lopsided convergence is equivalent to epi-convergence, a direct consequence of the definitions, as stated next.

4.1 Proposition (lop-convergence subsumes epi-convergence) Suppose that the bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) have domF = C × Y , domF ν = Cν × Y , F (x, y) = F (x, y′) for all x ∈ C and y, y′ ∈ Y , and F ν(x, y) = F ν(x, y′) for all x ∈ Cν and y, y′ ∈ Y . Then, for any y ∈ Y ,

F ν lop-converges to F ⇐⇒ F ν(·, y) : Cν → IR epi-converges to F (·, y) : C → IR.

Lop-convergence implies hypo-convergence for certain functions as seen next.

4.2 Proposition (hypo-convergence of slices) For {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) with domains described by (C,D) and (Cν , Dν), respectively, we have that F ν −→lop F implies that for all x ∈ C, there exists xν ∈ Cν → x such that the functions F ν(xν , ·) : Dν(xν) → IR hypo-converge to F (x, ·) : D(x) → IR.


Proof. From condition (b) of Definition 3.1 there exists xν ∈ Cν → x such that the functions {−F ν(xν , ·) : Dν(xν) → IR, ν ∈ IN} and −F (x, ·) : D(x) → IR satisfy condition (a) of Definition 3.7. From condition (a) of Definition 3.1, for any y ∈ D(x) one can find yν ∈ Dν(xν) → y such that condition (b) of Definition 3.7 holds for {−F ν(xν , ·) : Dν(xν) → IR, ν ∈ IN} and −F (x, ·) : D(x) → IR. Thus, the former functions epi-converge to the latter.

The sup-projection⁵ (in y) of F ∈ bfcns(X, Y ) with domain described by (C,D) exists if supy∈D(x) F (x, y) < ∞ for some x ∈ C. When it exists, the sup-projection of F is the real-valued function f given by

    f(x) = supy∈D(x) F (x, y) whenever x ∈ C and supy∈D(x) F (x, y) < ∞.

This means that the domain of the sup-projection is a subset of C, which could be strict. Bifunctions without sup-projections are somewhat pathological and in the case of minsup problems correspond to infeasibility. Since D(x) ≠ ∅ for x ∈ C, supy∈D(x) F (x, y) > −∞ and, thus, sup-projections are indeed real-valued. We denote by f and f ν sup-projections of F and F ν , respectively.

⁵ In parametric optimization and elsewhere, sup-projections of bifunctions are sometimes called optimal value functions.

It is clear that the sup-projection of F ν with domain defined by (Cν , Dν) might not exist, i.e., supy∈Dν(x) F ν(x, y) = ∞ for all x ∈ Cν , even if that of F does and F ν −→lop F , as the following example shows.

Absence of sup-projection. Consider the bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(IR, IR) with domains described by C = Cν = [0, 1], D = Dν = {y ∈ IR : y ≥ 0}, and values F (x, y) = yx and F ν(x, y) = y(x + 1/ν) when defined. Clearly, the sup-projection of F has domain {0}. However, there is no ν for which F ν has a sup-projection. This situation takes place even though one can show that F ν −→lop F .

A consequence of lopsided convergence for the epigraphs of sup-projections is given next.

4.3 Theorem (containment of sup-projections) For bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ), with corresponding sup-projections {f, f ν , ν ∈ IN}, F ν −→lop F implies that

    OutLim(epi f ν) ⊂ epi f.

Proof. Let (C,D) and (Cν , Dν) describe the domains of F and F ν , respectively. Suppose that (x, α) ∈ OutLim(epi f ν). Then there exist N ∈ N#∞ and {(xν , αν), ν ∈ N}, with xν ∈ Cν , supy∈Dν(xν) F ν(xν , y) ≤ αν , xν →N x, and αν →N α. If x ∉ C, then we can construct by condition (a) of Definition 3.1 a sequence yν ∈ Dν(xν) such that F ν(xν , yν) →N ∞. However,

    αν ≥ supy∈Dν(xν) F ν(xν , y) ≥ F ν(xν , yν), ν ∈ N,

implies a contradiction since αν →N α ∈ IR. Thus, x ∈ C. If supy∈D(x) F (x, y) = ∞, then there exists y ∈ D(x) such that F (x, y) ≥ α + 1. Condition (a) of Definition 3.1 ensures that there exists a sequence yν ∈ Dν(xν) →N y such that liminfν∈N F ν(xν , yν) ≥ F (x, y). Consequently,

    α = liminfν∈N αν ≥ liminfν∈N supy∈Dν(xν) F ν(xν , y) ≥ liminfν∈N F ν(xν , yν) ≥ F (x, y) ≥ α + 1,

which is a contradiction. Hence, it suffices to consider the case with supy∈D(x) F (x, y) finite. Given any ε > 0 arbitrarily small, pick yε ∈ D(x) such that F (x, yε) ≥ supy∈D(x) F (x, y) − ε. Then condition (a) of Definition 3.1 again yields yν ∈ Dν(xν) →N yε such that

    liminfν∈N supy∈Dν(xν) F ν(xν , y) ≥ liminfν∈N F ν(xν , yν) ≥ F (x, yε) ≥ supy∈D(x) F (x, y) − ε,

implying liminfν∈N supy∈Dν(xν) F ν(xν , y) ≥ supy∈D(x) F (x, y). Since

    α = liminfν∈N αν ≥ liminfν∈N supy∈Dν(xν) F ν(xν , y) ≥ supy∈D(x) F (x, y),

the conclusion follows.

It is clear from the following example that lop-convergence of bifunctions does not guarantee epi-convergence of the corresponding sup-projections.

Absence of tightness. Consider {F, F ν , ν ∈ IN} ⊂ bfcns(IR, IR) with domains IR × IR and values F (x, y) = 0 for (x, y) ∈ IR × IR, and F ν(x, y) = 1 if x ∈ IR and y = ν, and zero otherwise. It is easy to show that F ν −→lop F . The sup-projection of F ν has value 1 for all x ∈ IR and that of F has value 0 for all x ∈ IR. Thus, the inclusion in Theorem 4.3 is strict. We observe that ancillary-tightness fails in this instance and that property is indeed key to eliminating such possibilities. When Y is a space of distribution functions, these questions translate into those about tightness in the sense of probability theory; see the remark after Definition 3.5 and [29].

4.4 Theorem (epi-convergence of sup-projections) Suppose the bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) have corresponding sup-projections {f, f ν , ν ∈ IN}. If F ν lop-converges ancillary-tightly to F , then

epi f ν → epi f, also written f ν →e f.

Proof. Let (C,D) and (Cν , Dν) describe the domains of F and F ν , respectively, and x ∈ dom f . Now, choose xν ∈ Cν → x such that F ν(xν , ·) : Dν(xν) → IR hypo-converge to F (x, ·) : D(x) → IR, cf. Proposition 4.2. In fact, these functions hypo-converge tightly as an immediate consequence of ancillary-tightness. Thus,

    supy∈Dν(xν) F ν(xν , y) → supy∈D(x) F (x, y),

via Theorem 3.8. This fact together with Theorem 4.3 establishes the conclusion.

It is clear that ancillary-tight lop-convergence is a stronger condition than epi-convergence of the corresponding sup-projections, i.e., the converse of Theorem 4.4 fails. For example, the bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(IR, IR) with domains described by C = Cν = [0, 1] and D = Dν = [0, 1], and values F (x, y) = 1, F ν(x, y) = y for all (x, y) ∈ C × D have sup-projections that are identical and, certainly, f ν →e f . However, condition (a) of Definition 3.1 fails and therefore F ν does not lop-converge to F .

We end this subsection by listing consequences for semicontinuity, concavity, and convexity.

4.5 Proposition (usc and concavity) For bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) with domains described by (C,D) and (Cν , Dν), respectively, and any x ∈ C, the lop-convergence F ν −→lop F implies that the (univariate) function F (x, ·) : D(x) → IR is usc.

When Y is a linear space and, for all xν ∈ Cν → x, F ν(xν , ·) : Dν(xν) → IR is concave, it also implies that F (x, ·) is concave.

Proof. In view of Proposition 4.2, for every x ∈ C, there exists xν ∈ Cν → x such that F ν(xν , ·) hypo-converges to F (x, ·), which therefore must be usc. If the functions F ν(xν , ·) are concave, their hypo-limit must also be concave, which establishes the conclusions.

4.6 Proposition (lsc and convexity) Suppose the bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) have domains described by (C,D) and (Cν , Dν), respectively, and corresponding sup-projections {f, f ν , ν ∈ IN}. If F ν lop-converges ancillary-tightly to F , then f is lsc and also convex provided that X is a linear space, Dν is constant on Cν , and F ν(·, y) is convex for all y ∈ Dν .

Proof. In view of Theorem 4.4, f is an epi-limit and thus lsc. Convexity is guaranteed if f ν = supy∈Dν(·) F ν(·, y), ν ∈ IN , are convex, which holds by the stated assumptions.

4.2 Consequence for Minsup Problems

We have now reached the presentation of the main results of the paper. The minsup-value of a bifunction F ∈ bfcns(X, Y ) is defined as

    infsupF := infx∈C supy∈D(x) F (x, y),

which clearly is the optimal value of the minsup problem minx∈C supy∈D(x) F (x, y). The corresponding ε-optimal solutions, referred to as ε-minsup-points of F , are given by

    ε-argminsupF := {x ∈ C : supy∈D(x) F (x, y) ≤ infsupF + ε}, for ε ≥ 0.

If ε = 0, we simply refer to such points as minsup-points.

4.7 Theorem (bounds on minsup-value) Suppose that {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) have sup-projections, F ν −→lop F , and {xν ∈ argminsupF ν , ν ∈ IN} exist. Then, the following hold:

(a) If {xν , ν ∈ N} converges for some N ∈ N#∞, then liminfν∈N(infsupF ν) ≥ infsupF .

(b) If lop-convergence is ancillary-tight, then limsup(infsupF ν) ≤ infsupF and

    ∀{εν ↓ 0, ν ∈ IN}, OutLim(εν-argminsupF ν) ⊂ argminsupF.

(c) If lop-convergence is ancillary-tight and {xν , ν ∈ N} converges for some N ∈ N#∞, then limν∈N(infsupF ν) = infsupF .

Proof. Let f and f ν be the sup-projections of F and F ν , respectively. We first consider (a): Let N ∈ N#∞ and x be such that xν →N x. Theorem 4.3 applies so that OutLim(epi f ν) ⊂ epi f , which in turn ensures that condition (a) of Definition 3.7 holds. Thus, if x ∈ dom f , then liminfν∈N(infsupF ν) = liminfν∈N f ν(xν) ≥ f(x) ≥ infsupF . If x ∉ dom f , then liminfν∈N(infsupF ν) = liminfν∈N f ν(xν) = ∞ and the conclusion holds. Second, consider (b) and (c): Theorem 4.4 implies that f ν →e f and thus the conclusion is a direct application of Theorem 3.8.

Tight lop-convergence implies the following strengthening of the result for minsup-points and values.

4.8 Theorem (approximating minsup-points) Suppose that {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) have sup-projections and F ν −→lop F tightly. Then,

(a) infsupF ν → infsupF , which is finite;

(b) for ε > 0, InnLim(ε- argminsupF ν) ⊃ argminsupF ;

(c) for X separable, ∃{εν ↓ 0, ν ∈ IN} such that εν- argminsupF ν → argminsupF .

Proof. The assumptions imply those of Theorem 4.4 and thus the sup-projection of F ν epi-converges to that of F . In view of Definition 3.6, the epi-convergence is tight; see Definition 3.7. The conclusion is then a direct application of Theorem 3.8.
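As a quick numerical illustration of part (a) (a sketch with hypothetical data, not taken from the paper), take F (x, y) = xy and F ν(x, y) = (x + 1/ν)y on C = Cν = [0, 1] and D = Dν = [−1, 1]. One can check via Theorem 3.2 that F ν −→lop F , and the convergence is tight since all the sets involved are compact; the approximate minsup-values 1/ν then converge to the actual minsup-value 0.

```python
# Illustrative check of Theorem 4.8(a) on hypothetical data (grids stand in for the intervals).
import numpy as np

xs = np.linspace(0.0, 1.0, 201)     # C = C^nu = [0, 1]
ys = np.linspace(-1.0, 1.0, 201)    # D = D^nu = [-1, 1]

def minsup(shift):
    # sup-projection f^nu(x) = max over y of (x + shift) * y, minimized over x
    return min(max((x + shift) * y for y in ys) for x in xs)

print("actual minsup-value:", minsup(0.0))
for nu in (1, 10, 100):
    print(f"nu = {nu:3d}: approximate minsup-value = {minsup(1.0 / nu):.4f}")
```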

It is well-known that the infimum of a lsc function with a domain contained in a compact set is attained. Consequently, if the sup-projection f of some bifunction F ∈ bfcns(X, Y ) is lsc, which holds under mild assumptions (cf. Proposition 5.1), and dom f is contained in some compact set, then there exists a minimizer of f and thus also a minsup-point of F . We next state a result that relaxes the compactness requirement.

4.9 Theorem (existence of minsup-points) Suppose that {F, F ν , ν ∈ IN} ⊂ bfcns(X, Y ) have sup-projections, those of F ν , denoted by f ν , are lsc, F ν −→lop F ancillary-tightly, and there are compact sets {Bν ⊂ X, ν ∈ IN} such that dom f ν ⊂ Bν . Then, {xν ∈ argminsupF ν , ν ∈ IN} exist and every cluster point of {xν , ν ∈ IN} is a minsup-point of F .

Proof. The discussion prior to the theorem ensures the existence of minsup-points of F ν for every ν. The result is then a consequence of Theorem 4.7.

Theorem 4.9 does not ensure the existence of a cluster point of {xν , ν ∈ IN}. Still, it provides an approach for establishing the existence of a minsup-point of F : first construct a sequence {F ν , ν ∈ IN}, with the required properties, that lop-converges ancillary-tightly to F and, second, prove that {xν , ν ∈ IN} has a cluster point.


4.3 Application to Generalized Nash Games

From §2.2, we know that the solutions of a Generalized Nash game are fully characterized by the associated Nikaido-Isoda bifunction; see Proposition 2.1. Here, we illustrate the application of lopsided convergence in this context and in the process obtain stability results for Generalized Nash games in metric spaces with agent-constraints that depend on the other agents' choices of strategies.

We consider the

    actual game   xa ∈ argminxa∈Da(x−a) ca(xa, x−a), for all a ∈ A,

where the notation follows that of §2.2, and the

    approximating game   xa ∈ argminxa∈Dνa(x−a) cνa(xa, x−a), for all a ∈ A,

where cνa : Xa × X−a → IR is the approximating cost function and Dνa : X−a →→ Xa is the approximating set-valued mapping that defines the constraints for agent a ∈ A. The Nikaido-Isoda bifunction associated with the actual game is defined in §2.2. Similarly, the one for the approximating game is

    F ν(x, y) = ∑a∈A [cνa(xa, x−a) − cνa(ya, x−a)]   for x ∈ Cν ⊂ X, y ∈ Dν(x) ⊂ X,

with

    Cν = {x ∈ X : xa ∈ Dνa(x−a) for all a ∈ A}   and   Dν(x) = ∏a∈A Dνa(x−a).

This defines a set-valued mapping Dν : Cν →→ X. We assume throughout this subsection that there exist x, xν ∈ X such that xa ∈ Da(x−a) and xνa ∈ Dνa(xν−a) for all a ∈ A and ν ∈ IN , i.e., there are feasible solutions. Thus, F, F ν ∈ bfcns(X,X). The approximating Nikaido-Isoda bifunction lop-converges to the actual bifunction under natural assumptions, as seen next. This result and the associated propositions extend in many ways Corollary 7.5 of [21], which assumes convex and compact agent-constraint sets that are independent of other agents' strategies as well as other assumptions, and also [18], which also adopts the independence assumption and, in addition, assumes constraint sets that are not approximated.

4.10 Theorem (lop-convergence of Nikaido-Isoda bifunctions) For Nikaido-Isoda bifunctions {F, F ν , ν ∈ IN} ⊂ bfcns(X,X) defined in terms of cost functions {ca, cνa, a ∈ A, ν ∈ IN} and constraint mappings {Da, Dνa , a ∈ A, ν ∈ IN}, we have F ν −→lop F provided that

(a) the mappings Dνa continuously converge to Da, i.e., ∀xν ∈ X → x ∈ X, Dνa(xν−a) → Da(x−a) ∀a ∈ A;

(b) the cost functions cνa continuously converge to ca, i.e., ∀xν ∈ X → x ∈ X, cνa(xνa, xν−a) → ca(xa, x−a) ∀a ∈ A; and

(c) either Dνa(x−a) ⊃ Da(x−a) ∀x−a ∈ X−a, a ∈ A, and ν ∈ IN , or the mappings Dνa and Da are constant for all ν ∈ IN and a ∈ A.

Proof. We obtain lop-convergence directly from Definition 3.1. Let C, C^ν, D, D^ν be as defined in this subsection and §2.2. Suppose that x^ν ∈ C^ν → x ∈ C and y ∈ D(x). Since D^ν_a(x^ν_{-a}) → D_a(x_{-a}) for all a ∈ A, D^ν(x^ν) → D(x). This set-convergence guarantees that there exist y^ν ∈ D^ν(x^ν) → y. The continuous convergence of c^ν_a to c_a carries over to the bifunctions and consequently F^ν(x^ν, y^ν) → F(x, y). The first part of Definition 3.1(a) is satisfied. Next, since C^ν = ∩_{a∈A} {x ∈ X : x_a ∈ D^ν_a(x_{-a})}, it follows by assumption (a) that

OutLim C^ν ⊂ ∩_{a∈A} OutLim {x ∈ X : x_a ∈ D^ν_a(x_{-a})} ⊂ ∩_{a∈A} {x ∈ X : x_a ∈ D_a(x_{-a})} = C.

Thus, every convergent sequence x^ν ∈ C^ν must have its limit in C and the second part of Definition 3.1(a) is automatically satisfied. We then turn to Definition 3.1(b) and let x ∈ C. First, suppose that D^ν_a(x_{-a}) ⊃ D_a(x_{-a}) ∀ x_{-a} ∈ X_{-a}, a ∈ A, and ν ∈ IN; see assumption (c). Then, set x^ν = x, which will be in C^ν because C^ν ⊃ C. Suppose that y^ν ∈ D^ν(x) → y ∈ D(x). Again, assumption (b) ensures that F^ν(x^ν, y^ν) → F(x, y). This concludes the argument under the first alternative in (c); the possibility y^ν ∈ D^ν(x) → y ∉ D(x) is ruled out by assumption (a). Second, suppose that the mappings D^ν_a and D_a are constant for all ν ∈ IN and a ∈ A; these constant sets are also denoted by D^ν_a and D_a. Since in this case C = ∏_{a∈A} D_a and C^ν = ∏_{a∈A} D^ν_a, C^ν → C and there exist x^ν ∈ C^ν → x. Suppose that y^ν ∈ D^ν(x^ν) → y ∈ D(x). Again, assumption (b) ensures that F^ν(x^ν, y^ν) → F(x, y). This concludes the proof, as the possibility y^ν ∈ D^ν(x^ν) → y ∉ D(x) can again be ruled out.

It is possible to relax the assumptions of Theorem 4.10. For example, in (a) the continuous convergence may be required only relative to smaller sets, such as for x^ν ∈ C^ν → x ∈ C. However, since C^ν and C are not easily characterized from D^ν_a and D_a, we settled on the stated, more easily checked, assumptions. The continuous convergence of the cost functions (assumption (b)) is usually unavoidable (see the discussion in [21, Section 5] for the context of variational inequalities), but fortunately it is often satisfied in the applications we have in mind.

Although other options exist, it is generally impossible to dispense entirely with a "constraint qualification" of the kind stated in assumption (c) of Theorem 4.10. The source of the difficulty is that assumption (a) fails to guarantee that C^ν → C. Although OutLim C^ν ⊂ C, as leveraged in the above proof, the inclusion may be strict.

Absence of constraint qualification. Suppose that A = {1, 2}, X = IR × IR, and, for all x ∈ X, a ∈ A, and ν ∈ IN,

D_a(x_{-a}) = {0} ∪ {x_{-a}}   and   D^ν_a(x_{-a}) = {0} ∪ {x_{-a} + 1/ν}.

In this case, C = {x ∈ IR × IR : x_1 = x_2} and C^ν = {(0, 0), (0, 1/ν), (1/ν, 0)} for all ν. Clearly, OutLim C^ν = {(0, 0)} is a strict subset of C. This takes place even though assumption (a) of Theorem 4.10 holds. However, as indicated in the second alternative in assumption (c) of Theorem 4.10, this difficulty evaporates when D^ν and D are constant set-valued mappings, i.e., when an agent's constraint set is independent of the other agents' strategies.

Theorem 4.10 provides a main step in sensitivity analysis, in the study of approximations, and in questions related to the existence of solutions for Generalized Nash games. Leveraging Theorems 4.7, 4.8, and 4.9, it is easy to develop a variety of results, possibly bringing in ancillary-tightness and tightness. We give two results that also permit arguments directly from the definition of lopsided convergence.

4.11 Proposition (convergence of equilibria) For Nikaido-Isoda bifunctions {F, F^ν, ν ∈ IN} ⊂ bfcns(X, X) associated with the actual and approximating games, respectively, suppose that F^ν −→lop F and {x^ν, ν ∈ IN} are equilibria of the approximating games. Then, every cluster point x̄ of {x^ν, ν ∈ IN} is an equilibrium of the actual game.

Proof. By Proposition 2.1, x^ν ∈ argminsup F^ν and f^ν(x^ν) ≤ 0, where f^ν is the sup-projection of F^ν. Let N ∈ N_∞^# be such that x^ν →_N x̄ and let C, C^ν, D, D^ν be as defined in this subsection and §2.2. Suppose that x̄ ∉ C. Then, by part (a) of Definition 3.1, there exist y^ν ∈ D^ν(x^ν) such that F^ν(x^ν, y^ν) →_N ∞. However, 0 ≥ f^ν(x^ν) ≥ F^ν(x^ν, y^ν) for ν ∈ N, which is a contradiction. Thus, x̄ ∈ C. Next, suppose that sup_{y∈D(x̄)} F(x̄, y) > 0. Then, there exists ȳ ∈ D(x̄) such that F(x̄, ȳ) > 0. Again by part (a) of Definition 3.1, there exist y^ν ∈ D^ν(x^ν) →_N ȳ such that liminf_{ν∈N} F^ν(x^ν, y^ν) ≥ F(x̄, ȳ). As argued above, the left-hand side is nonpositive and another contradiction is reached. Therefore, sup_{y∈D(x̄)} F(x̄, y) ≤ 0. For all x ∈ C, x ∈ D(x) and F(x, x) = 0. Consequently, sup_{y∈D(x)} F(x, y) ≥ 0 for all x ∈ C, which establishes that x̄ ∈ argminsup F. By Proposition 2.1, x̄ is an equilibrium of the actual game.

The next proposition resembles a result in [22].

4.12 Proposition (convergence of equilibria; specifics) Suppose that the actual and approximating games are given by cost functions {c_a, c^ν_a, a ∈ A, ν ∈ IN} and constraint mappings {D_a, D^ν_a, a ∈ A, ν ∈ IN}, with

(a) D^ν_a continuously converging to D_a, i.e., ∀ x^ν ∈ X → x ∈ X, D^ν_a(x^ν_{-a}) → D_a(x_{-a}) ∀ a ∈ A; and

(b) c^ν_a continuously converging to c_a, i.e., ∀ x^ν ∈ X → x ∈ X, c^ν_a(x^ν_a, x^ν_{-a}) → c_a(x_a, x_{-a}) ∀ a ∈ A.

If {x^ν, ν ∈ IN} are equilibria of the approximating games, then every cluster point x̄ of {x^ν, ν ∈ IN} is an equilibrium of the actual game.

Proof. Examining the proof of Theorem 4.10, we find that the present assumptions suffice toestablish part (a) of Definition 3.1. Since part (b) of that definition is not utilized in the proofof Proposition 4.11, the arguments there carry over.

The existence of equilibria for the approximating games is assumed in these propositions. Although nontrivial in general, Theorem 4.9 provides a path to such existence results; see also [21] for approaches based on Ky Fan bifunctions.

For computational reasons, an agent's constraints may be removed and replaced with a penalty in the objective function. This natural idea (see for example [17, 16, 15]) also leads to lop-convergence under mild assumptions. Specifically, for some g_a : X_a × X_{-a} → IR^m, a ∈ A, suppose that

D_a(x_{-a}) = {x_a ∈ X_a : g_a(x_a, x_{-a}) ≤ 0}   for x_{-a} ∈ X_{-a},

which together with the cost functions c_a define the actual game in this case. For ρ^ν ≥ 0, the approximating game has, for all a ∈ A,

c^ν_a(x_a, x_{-a}) = c_a(x_a, x_{-a}) + ρ^ν g_a(x_a, x_{-a})^+,

where v^+ = max{0, v_1, ..., v_m} for v ∈ IR^m, and D^ν_a(x_{-a}) = X_a, i.e., in the approximating game the agents solve unconstrained problems, but with penalized cost functions. The corresponding Nikaido-Isoda bifunctions are given as

F(x, y) = ∑_{a∈A} [ c_a(x_a, x_{-a}) − c_a(y_a, x_{-a}) ]
   for x ∈ C = {x ∈ X : g_a(x_a, x_{-a}) ≤ 0 ∀a ∈ A} and y ∈ D(x) = ∏_{a∈A} {y_a ∈ X_a : g_a(y_a, x_{-a}) ≤ 0},

F^ν(x, y) = ∑_{a∈A} [ c_a(x_a, x_{-a}) + ρ^ν g_a(x_a, x_{-a})^+ − c_a(y_a, x_{-a}) − ρ^ν g_a(y_a, x_{-a})^+ ]
   for x ∈ C^ν = X and y ∈ D^ν(x) = X.
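Continuing in the same illustrative Python style, the sketch below builds the penalized costs c^ν_a and the resulting bifunction F^ν for a hypothetical two-agent game; the particular costs c_a, constraint functions g_a, and penalty parameters ρ^ν are assumptions chosen only to show the mechanics of the construction.

    import numpy as np

    A = (0, 1)
    rho = [10.0 * (nu + 1) for nu in range(50)]        # hypothetical penalties rho^nu -> infinity

    def c(a, x_a, x_other):                            # hypothetical smooth costs
        return (x_a - 1.0)**2 + x_a * x_other

    def g(a, x_a, x_other):                            # hypothetical constraints g_a(x_a, x_-a) <= 0
        return np.array([x_a + x_other - 1.0, -x_a])

    def plus(v):                                       # v^+ = max{0, v_1, ..., v_m}
        return max(0.0, *v)

    def c_nu(a, x_a, x_other, nu):                     # penalized cost of the approximating game
        return c(a, x_a, x_other) + rho[nu] * plus(g(a, x_a, x_other))

    def F_nu(x, y, nu):                                # penalized Nikaido-Isoda bifunction on X x X
        return sum(c_nu(a, x[a], x[1 - a], nu) - c_nu(a, y[a], x[1 - a], nu) for a in A)

    # as rho^nu grows, infeasible x are penalized ever more heavily
    print(F_nu((0.6, 0.6), (0.4, 0.4), nu=0), F_nu((0.6, 0.6), (0.4, 0.4), nu=40))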

The latter bifunctions lop-converge to the former under mild assumptions.

4.13 Proposition (lop-convergence in penalty methods) For continuous c_a : X_a × X_{-a} → IR and g_a : X_a × X_{-a} → IR^m, a ∈ A, suppose that

(a) for all x ∈ X there exists y ∈ X with g_a(y_a, x_{-a}) ≤ 0 ∀ a ∈ A; and

(b) for all x, y ∈ X with g_a(y_a, x_{-a}) ≤ 0 ∀ a ∈ A, there exist y^ν ∈ X → y such that g_a(y^ν_a, x_{-a}) < 0 ∀ a ∈ A and ν ∈ IN.

If F is the Nikaido-Isoda bifunction defined by {(c_a, D_a), a ∈ A}, with D_a(·) = {x_a ∈ X_a : g_a(x_a, ·) ≤ 0}, and F^ν is the Nikaido-Isoda bifunction defined by {(c_a + ρ^ν g_a^+, X_a), a ∈ A}, then F^ν −→lop F provided that ρ^ν → ∞.

Proof. We start by establishing that assumption (b), in conjunction with continuity, implies the seemingly stronger property that for all x^ν ∈ X → x ∈ X and y ∈ X with g_a(y_a, x_{-a}) ≤ 0 ∀ a ∈ A, there exist y^ν ∈ X → y such that g_a(y^ν_a, x^ν_{-a}) ≤ 0 ∀ a ∈ A. Let x, y ∈ X with g_a(y_a, x_{-a}) ≤ 0 ∀ a ∈ A and x^ν ∈ X → x. By assumption (b), there exist y^k ∈ X → y such that g_a(y^k_a, x_{-a}) < 0 ∀ a ∈ A and k ∈ IN. By the continuity of g_a, there exist {ε^k > 0, k ∈ IN} such that

g_a(y^k_a, x′_{-a}) ≤ 0   for all a ∈ A and x′ ∈ IB(x, ε^k) = {x′ ∈ X : d_X(x′, x) ≤ ε^k},

where d_X is the metric on X. Since x^ν → x, there exists ν^1 ∈ IN such that x^ν ∈ IB(x, ε^1) for all ν ≥ ν^1. Moreover, for k = 2, 3, ..., there exist ν^k > ν^{k−1} such that x^ν ∈ IB(x, ε^k) for all ν ≥ ν^k. We then construct the sequence {y^ν, ν ∈ IN, ν ≥ ν^1} by setting

y^ν = y^k   for ν^k ≤ ν < ν^{k+1}, k ∈ IN.

Since y^k → y, y^ν → y. Thus, for all k ∈ IN, a ∈ A, and ν^k ≤ ν < ν^{k+1},

g_a(y^ν_a, x^ν_{-a}) = g_a(y^k_a, x^ν_{-a}) ≤ 0

because x^ν ∈ IB(x, ε^k). This establishes the claimed property.

We proceed by showing lop-convergence directly from Definition 3.1. Suppose that x^ν ∈ C^ν → x ∈ C and y ∈ D(x). By the property just established, there exist y^ν ∈ X → y such that g_a(y^ν_a, x^ν_{-a})^+ = 0 ∀ a ∈ A. Hence,

liminf F^ν(x^ν, y^ν) = liminf ∑_{a∈A} [ c_a(x^ν_a, x^ν_{-a}) + ρ^ν g_a(x^ν_a, x^ν_{-a})^+ − c_a(y^ν_a, x^ν_{-a}) ]
                     ≥ liminf ∑_{a∈A} [ c_a(x^ν_a, x^ν_{-a}) − c_a(y^ν_a, x^ν_{-a}) ] = F(x, y).

Next, suppose that x^ν ∈ C^ν → x ∉ C. By assumption (a) and the established property, there exist y ∈ X and y^ν ∈ X → y with g_a(y^ν_a, x^ν_{-a})^+ = 0 ∀ a ∈ A. We then have that

F^ν(x^ν, y^ν) = ∑_{a∈A} [ c_a(x^ν_a, x^ν_{-a}) + ρ^ν g_a(x^ν_a, x^ν_{-a})^+ − c_a(y^ν_a, x^ν_{-a}) ] → ∞

because ρ^ν → ∞ and ∑_{a∈A} g_a(x^ν_a, x^ν_{-a})^+ remains bounded away from zero as ν → ∞ by virtue of x ∉ C and the continuity of g_a for all a ∈ A. Consequently, part (a) of Definition 3.1 holds. Next, suppose that x ∈ C and set x^ν = x, which implies that g_a(x_a, x_{-a})^+ = 0 for all a ∈ A. If y^ν ∈ X → y ∈ D(x), then

limsup F^ν(x^ν, y^ν) = limsup ∑_{a∈A} [ c_a(x_a, x_{-a}) − c_a(y^ν_a, x_{-a}) − ρ^ν g_a(y^ν_a, x_{-a})^+ ]
                     ≤ limsup ∑_{a∈A} [ c_a(x_a, x_{-a}) − c_a(y^ν_a, x_{-a}) ] = F(x, y).

Otherwise, if y^ν ∈ X → y ∉ D(x), then we have that

F^ν(x^ν, y^ν) = ∑_{a∈A} [ c_a(x_a, x_{-a}) − c_a(y^ν_a, x_{-a}) − ρ^ν g_a(y^ν_a, x_{-a})^+ ] → −∞

because ρ^ν → ∞ and ∑_{a∈A} g_a(y^ν_a, x_{-a})^+ remains bounded away from zero as ν → ∞ due to the fact that y ∉ D(x). This establishes part (b) of Definition 3.1 and the proof is complete.

Again, a constraint qualification enters; see assumption (b) in Proposition 4.13. In essence, we assume that, given strategies of the others for which an agent has a feasible response, the agent can unilaterally make its constraints inactive by moving slightly away from that response. The condition resembles the familiar Slater condition and other conditions related to the existence of nonempty interiors of feasible sets. It is apparent that Proposition 4.13 in conjunction with Proposition 4.11 can be used to establish convergence of algorithmic approaches that rely on penalization of constraints and the solution of the resulting unconstrained game; see [17, 16, 15] for results in this direction, though under somewhat different assumptions. The proposition is an example of lop-convergence without the approximating constraints ever approaching those of the actual game.

5 Quantification of Lopsided Convergence

We next turn to a definition of "distance" between bifunctions that in some sense characterizes lop-convergence. We would like the distance between F^ν and F to tend to zero if and only if F^ν lop-converges to F. In this section, we develop for the first time such a distance, which essentially characterizes lop-convergence after passing to certain equivalence classes of bifunctions. We begin by establishing a foundation regarding a distance between (univariate) functions.

5.1 Attouch-Wets Distance between Functions

We limit the scope to the subset of fcns(X) consisting of lsc functions and let

lsc-fcns(X) := {h ∈ fcns(X) : h lsc}.

This set is equipped with the Attouch-Wets (aw) distance dlaw, which is given relative to a fixed but arbitrary point^6 x̄ ∈ X. Specifically, the aw-distance is defined for any h, g ∈ lsc-fcns(X) as

dlaw(h, g) := ∫_0^∞ dlaw_ρ(h, g) e^{−ρ} dρ,

where the ρ-aw-distance, for any ρ ≥ 0, is given by

dlaw_ρ(h, g) := sup{ |dist((x, α), epi h) − dist((x, α), epi g)| : d_X(x, x̄) ≤ ρ, |α| ≤ ρ }

and

dist((x, α), epi h) := inf{ max{d_X(x, x′), |α − α′|} : (x′, α′) ∈ epi h },

with a similar expression pertaining to epi g. We observe that x̄ serves as the center of the ball on which dlaw_ρ is computed. The aw-distance defined here is essentially that pioneered in [5], which in turn traces its origin to the study of convex cones in [32] and convex sets in [23]; see also [9, 26].

^6 Although the topology induced by the aw-distance is the same for any point selected, the value of the distance will depend on this choice; see [5, 27] and [26, Section 7.J] for details.
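A rough numerical estimate of the aw-distance can clarify the definition. The Python sketch below makes several simplifying assumptions that are not part of the paper: X = IR with the usual metric, the center x̄ = 0, epigraphs truncated to a bounded box and sampled on a grid, and the integral over ρ truncated and evaluated by a simple Riemann sum. It merely illustrates the two-stage structure of the definition (an epigraphical distance inside, a sup over a ball, then a weighted integral).

    import numpy as np

    # Two simple lsc functions on X = IR (illustrative choices, not from the paper).
    h = lambda x: x**2
    g = lambda x: x**2 + 0.5 * np.abs(x)

    xs = np.linspace(-5.0, 5.0, 201)                   # grid for the epigraphs
    alphas = np.linspace(-5.0, 25.0, 301)

    def epi_grid(fun):
        # grid approximation of epi fun = {(x, alpha) : alpha >= fun(x)} on the box above
        X, Al = np.meshgrid(xs, alphas)
        keep = Al >= fun(X)
        return X[keep], Al[keep]

    def dist_to_epi(x, alpha, ex, ea):
        # box-metric distance from (x, alpha) to the discretized epigraph
        return np.min(np.maximum(np.abs(x - ex), np.abs(alpha - ea)))

    def dlaw_rho(rho, eh, eg, centre=0.0, n=21):
        # sup over the ball {(x, alpha) : |x - centre| <= rho, |alpha| <= rho}
        pts_x = np.linspace(centre - rho, centre + rho, n)
        pts_a = np.linspace(-rho, rho, n)
        return max(abs(dist_to_epi(x, a, *eh) - dist_to_epi(x, a, *eg))
                   for x in pts_x for a in pts_a)

    eh, eg = epi_grid(h), epi_grid(g)
    rhos = np.linspace(0.0, 5.0, 11)                   # truncate the integral over rho at 5
    vals = np.array([dlaw_rho(r, eh, eg) for r in rhos])
    print("estimated aw-distance:", np.sum(vals * np.exp(-rhos)) * (rhos[1] - rhos[0]))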



It is immediate that dlaw is a metric on lsc-fcns(X). We deduce from [9, Theorem 3.1.7] thatfor hν , h ∈ lsc-fcns(X),

dlaw(hν , h) → 0 =⇒ hν →e h.

If (X, dX) is a finitely compact7 metric space, then the converse also holds in view of [9, Theorems3.1.4, 5.1.10, 5.2.10]; see also [27].

5.2 Lop-Distance

Parallel to the restriction to lsc-fcns(X) in the definition of dlaw, we focus on bifunctions withlsc sup-projections. Specifically, let

lsc-bfcns(X, Y) := {F ∈ bfcns(X, Y) with lsc sup-projection}.

In view of Proposition 4.6, the sup-projection of a bifunction to which a sequence of bifunctions lop-converges ancillary-tightly must be lsc. In fact, it is well known that sup-projections are lsc under mild assumptions. Sufficient conditions are given next, a result essentially contained in [6, Theorem 2, Section 2.5.2] but proved here for completeness.

5.1 Proposition (lsc sup-projection) When it exists, the sup-projection f of a bifunction F ∈ bfcns(X, Y), with domain described by (C, D), is lsc if either

(a) D is inner semicontinuous8 and F is lsc; or

(b) D ⊂ Y , i.e., the set-valued mapping is constant, and F (·, y) is lsc for all y ∈ D.

Proof. Let x^ν ∈ dom f → x ∈ dom f. Then sup_{y∈D(x)} F(x, y) is finite and for every ε > 0 there is a y_ε ∈ D(x) such that sup_{y∈D(x)} F(x, y) ≤ F(x, y_ε) + ε. Moreover, the inner semicontinuity of D implies that there exists a sequence y^ν_ε ∈ D(x^ν) → y_ε. Since F is lsc, there exists ν̄ such that for all ν ≥ ν̄, F(x^ν, y^ν_ε) ≥ F(x, y_ε) − ε. Thus, for ν ≥ ν̄,

sup_{y∈D(x^ν)} F(x^ν, y) ≥ F(x^ν, y^ν_ε) ≥ F(x, y_ε) − ε ≥ sup_{y∈D(x)} F(x, y) − 2ε.

Since ε is arbitrary, liminf sup_{y∈D(x^ν)} F(x^ν, y) ≥ sup_{y∈D(x)} F(x, y). Next, suppose that x^ν ∈ dom f → x ∉ dom f. We consider two cases. First, let x ∉ C. Then, there exist y^ν ∈ D(x^ν) such that F(x^ν, y^ν) → ∞. Thus, sup_{y∈D(x^ν)} F(x^ν, y) → ∞. Second, let x ∈ C, but sup_{y∈D(x)} F(x, y) = ∞. Then, for every M < ∞ there is a y_M ∈ D(x) such that F(x, y_M) > M. Following a similar argument as earlier, we find that liminf sup_{y∈D(x^ν)} F(x^ν, y) > M. Since M is arbitrary, the conclusion follows under (a). Under (b), with D constant, the sequence {y^ν_ε} can be selected to coincide with y_ε for all ν, and lower semicontinuity of F(·, y_ε) in x alone suffices.

^7 A metric space is finitely compact if all its closed balls are compact.
^8 D : C →→ Y is inner semicontinuous if for every x^ν ∈ C → x ∈ C, InnLim D(x^ν) ⊃ D(x).



5.2 Definition (lop-distance) For any bifunctions F, F′ ∈ lsc-bfcns(X, Y), with sup-projections f, f′, the lop-distance is

dllop(F, F′) := dlaw(f, f′).
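In the same illustrative spirit, the reduction behind Definition 5.2 is easy to carry out numerically: form the sup-projections of the two bifunctions and then compare those univariate functions with the aw-distance. The Python sketch below (hypothetical bifunctions and domains, chosen only for illustration) tabulates the sup-projections; the resulting values of f and f′ on the grid can then be fed into a grid-based aw-distance estimate such as the one sketched in §5.1.

    import numpy as np

    # Hypothetical bifunctions with C = [0, 1] and D(x) = [0, x + 1]; purely illustrative.
    def F(x, y):
        return np.sin(3.0 * x) - (y - x)**2

    def F_prime(x, y):
        return np.sin(3.0 * x) - (y - x)**2 + 0.1 * y

    C = np.linspace(0.0, 1.0, 101)

    def sup_projection(bifun, x, n=200):
        ys = np.linspace(0.0, x + 1.0, n)              # grid for D(x)
        return max(bifun(x, y) for y in ys)

    f = np.array([sup_projection(F, x) for x in C])
    f_prime = np.array([sup_projection(F_prime, x) for x in C])

    # dllop(F, F') = dlaw(f, f'): compare the tabulated f and f_prime with the
    # grid-based aw-distance estimate from Section 5.1 (not repeated here).
    print(f[:3], f_prime[:3])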

Applications and estimates of the lop-distances are in their infancy. A first attempt in the con-text of minsup problems with X = IRn, especially for optimization under stochastic ambiguity,can be found in [29], which leverages the present paper. There we see that the lop-distancebetween two bifunctions directly relates to the difference between their minsup-values.

It is immediately clear that the lop-distance is only a pseudometric on lsc-bfcns(X, Y ) as thereare nonidentical bifunctions with the same sup-projection. Thus, we pass to equivalence classes.

5.3 Definition (equivalence classes of bifunctions) Bifunctions F, F ′ ∈ lsc-bfcns(X, Y ) areequivalent, denoted by F ∼ F ′, if their sup-projections exist and are identical.

For bifunctions F, F′ ∈ lsc-bfcns(X, Y) with domains described by (C, D) and (C′, D′), respectively, we might have F ∼ F′ even if C ≠ C′, as long as every x ∈ C that is not in C′ has sup_{y∈D(x)} F(x, y) = ∞ and every x ∈ C′ that is not in C has sup_{y∈D′(x)} F′(x, y) = ∞. Thus, the domains of the sup-projections can still coincide. Of course, the values of the sup-projections must also be the same. It is clear that dllop is a metric on the quotient set of lsc-bfcns(X, Y).

5.4 Theorem (quantification of lop-convergence) For {F, F ν , ν ∈ IN} ⊂ lsc-bfcns(X, Y ), thefollowing hold:

(a) If dllop(F^ν, F) → 0, then there exist F̄^ν, F̄ ∈ lsc-bfcns(X, Y) such that

F̄^ν ∼ F^ν,   F̄ ∼ F,   and   F̄^ν −→lop F̄.

(b) If F^ν −→lop F ancillary-tightly and (X, d_X) is finitely compact, then dllop(F^ν, F) → 0.

Proof. Since F, F^ν ∈ lsc-bfcns(X, Y), the corresponding sup-projections f, f^ν exist. For part (a), set C̄ = dom f, which is nonempty, and D̄ = Y. We then define the bifunction F̄ : C̄ × Y → IR to have values F̄(x, y) = f(x) for x ∈ C̄ and y ∈ Y. By construction, for x ∈ C̄,

sup_{y∈Y} F̄(x, y) = sup_{y∈Y} sup_{y′∈D(x)} F(x, y′) = f(x) < ∞,

where D describes the domain of F. Consequently, f̄, the sup-projection of F̄, exists and dom f̄ = C̄. Moreover, f̄ = f on C̄. Thus, F̄ ∈ lsc-bfcns(X, Y) and F̄ ∼ F. An identical construction using the sup-projection f^ν of F^ν and setting C̄^ν = dom f^ν, D̄^ν = Y, and F̄^ν : C̄^ν × Y → IR having F̄^ν(x, y) = f^ν(x) for x ∈ C̄^ν and y ∈ Y results in F̄^ν ∈ lsc-bfcns(X, Y) and F̄^ν ∼ F^ν. We then obtain for any y ∈ Y that

dlaw(F̄^ν(·, y), F̄(·, y)) = dlaw(f^ν, f) = dllop(F^ν, F) → 0.



Since dlaw(F̄^ν(·, y), F̄(·, y)) → 0 implies that F̄^ν(·, y) : C̄^ν → IR epi-converges to F̄(·, y) : C̄ → IR, it follows from Proposition 4.1 that F̄^ν lop-converges to F̄.

For part (b) we conclude from Theorem 4.4 that f ν epi-converges to f . Thus, dlaw(f ν , f) → 0under the additional assumption that (X, dX) is finitely compact and the result follows from thedefinition of the lop-distance.

We recall that ancillary-tightness is ensured, for example, by some compactness property onthe sets over which the inner maximization is taking place. Thus, a restriction to bifunctionssatisfying such properties would ensure that the lop-distance fully characterizes lopsided conver-gence on this class provided that (X, dX) is finitely compact. The lop-distance generates whatwe define as the lop-topology on the space of equivalence classes of such bifunctions.

Acknowledgement. This material is based upon work supported in part by DARPA undergrant HR0011-14-1-0060 and the U. S. Office of Naval Research under grant number N00014-17-2372.

References

[1] H. Attouch. Variational Convergence for Functions and Operators. Applicable Mathematics Series. Pitman, 1984.
[2] H. Attouch, D. Aze, and R. J-B Wets. Convergence of convex-concave saddle functions: continuity properties of the Legendre-Fenchel transform with applications to convex programming and mechanics. Annales de l'Institut H. Poincaré: Analyse Non Linéaire, 5:537–572, 1988.
[3] H. Attouch and R. J-B Wets. Convergence des points min/sup et de points fixes. Comptes Rendus de l'Académie des Sciences de Paris, 296:657–660, 1983.
[4] H. Attouch and R. J-B Wets. A convergence theory for saddle functions. Transactions of the American Mathematical Society, 280(1):1–41, 1983.
[5] H. Attouch and R. J-B Wets. Quantitative stability of variational systems: I. The epigraphical distance. Transactions of the American Mathematical Society, 328(2):695–729, 1991.
[6] J.-P. Aubin. Mathematical Methods of Game and Economic Theory. North-Holland, 1982.
[7] J.-P. Aubin and I. Ekeland. Applied Nonlinear Analysis. Issue 1237 of Pure and Applied Mathematics. Wiley, 1984.
[8] J.-P. Aubin and H. Frankowska. Set-Valued Analysis. Birkhäuser, 1990.
[9] G. Beer. Topologies on Closed and Closed Convex Sets, volume 268 of Mathematics and its Applications. Kluwer, 1992.
[10] G. Beer, R. T. Rockafellar, and R. J-B Wets. A characterization of epi-convergence in terms of convergence of level sets. Proceedings of the American Mathematical Society, 116(3):753–761, 1992.
[11] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2009.
[12] J. Deride, A. Jofre, and R. J-B Wets. Solving deterministic and stochastic equilibrium problems via augmented Walrasian. Computational Economics, to appear, 2018.
[13] L. El Ghaoui and G. C. Calafiore. Optimization Models. Cambridge University Press, 2014.
[14] F. Facchinei and C. Kanzow. Generalized Nash equilibrium problems. Annals of Operations Research, 175(1):177–221, 2010.
[15] F. Facchinei and C. Kanzow. Penalty methods for the solution of generalized Nash equilibrium problems. SIAM J. Optimization, 20(5):2228–2253, 2010.
[16] F. Facchinei and J.-S. Pang. Exact penalty functions for generalized Nash problems. In G. Di Pillo and M. Roma, editors, Large-Scale Nonlinear Optimization, pages 115–126. Springer, 2006.
[17] M. Fukushima and J.-S. Pang. Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games. Computational Management Science, 2:21–56, 2005.
[18] G. Gurkan and J.-S. Pang. Approximations of Nash equilibria. Mathematical Programming B, 117:223–253, 2009.
[19] A. Jofre and R. J-B Wets. Continuity properties of Walras equilibrium points. Annals of Operations Research, 114:229–243, 2002.
[20] A. Jofre and R. J-B Wets. Variational convergence of bivariate functions: Lopsided convergence. Mathematical Programming B, 116:275–295, 2009.
[21] A. Jofre and R. J-B Wets. Variational convergence of bifunctions: Motivating applications. SIAM J. Optimization, 24(4):1952–1979, 2014.
[22] J. Morgan and V. Scalzo. Variational stability of social Nash equilibria. International Game Theory Review, 10(1):17–24, 2008.
[23] U. Mosco. Convergence of convex sets and of solutions of variational inequalities. Advances in Mathematics, 3(4):510–585, 1969.
[24] H. Nikaido and K. Isoda. Note on noncooperative convex games. Pacific J. Mathematics, 5:807–815, 1955.
[25] R. T. Rockafellar. Convex Analysis. Vol. 28 of Princeton Math. Series. Princeton University Press, 1970.
[26] R. T. Rockafellar and R. J-B Wets. Variational Analysis, volume 317 of Grundlehren der Mathematischen Wissenschaften. Springer, 3rd printing 2009 edition, 1998.
[27] J. O. Royset. Approximations and solution estimates in optimization. Mathematical Programming, https://doi.org/10.1007/s10107-017-1165-0, 2018.
[28] J. O. Royset and R. J-B Wets. Optimality functions and lopsided convergence. J. Optimization Theory and Applications, 169(3):965–983, 2016.
[29] J. O. Royset and R. J-B Wets. Variational theory for optimization under stochastic ambiguity. SIAM J. Optimization, 27(2):1118–1149, 2017.
[30] O. Stein. Bi-level Strategies in Semi-Infinite Programming. Kluwer Academic, 2003.
[31] H. Sun and H. Xu. Convergence analysis for distributionally robust optimization and equilibrium problems. Mathematics of Operations Research, 41:377–401, 2015.
[32] D. Walkup and R. J-B Wets. Continuity of some convex-cone-valued mappings. Proceedings of the American Mathematical Society, 18:229–235, 1967.
[33] J. Zhang, H. Xu, and L. Zhang. Quantitative stability analysis for distributionally robust optimization with moment constraints. SIAM J. Optimization, 26(3):1855–1882, 2016.
