Incentive Compatible Regression Learning Ofer Dekel, Felix A. Fischer and Ariel D. Procaccia



Page 1

Incentive Compatible Regression Learning
Ofer Dekel, Felix A. Fischer and Ariel D. Procaccia

Page 2

Lecture Outline

• Until now: applications of learning to game theory. Now: merging the two.

• The model:
– Motivation
– The learning game

• Three levels of generality:
– Distributions degenerate at one point
– Uniform distributions
– The general setting

Model Degenerate Uniform General

Page 3

Motivation

• Internet search company: improve performance by learning ranking function from examples.

• Ranking function assigns real value to every (query,answer).

• Employ experts to evaluate examples.

• Different experts may have different interests and different ideas of what a good output is.

• Conflict → manipulation → bias in the training set.


Page 4

Jaguar vs. Panthera Onca

[Figure: search-result screenshot, captioned (“Jaguar”, jaguar.com).]


Page 5

Regression Learning

• Input space X = R^k ((query, answer) pairs).
• Function class F of functions X → R (ranking functions).
• Target function o: X → R.
• Distribution D over X.
• Loss function ℓ(a,b):
– Absolute loss: ℓ(a,b) = |a-b|.
– Squared loss: ℓ(a,b) = (a-b)^2.

• Learning process:
– Given: training set S = {(xi, o(xi))}, i = 1,...,m, with xi sampled from D.
– Risk: R(h) = Ex~D[ℓ(h(x), o(x))].
– Find: h ∈ F minimizing R(h).

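The definitions above can be sketched in code. A minimal illustration of the empirical risk R'(h,S) under both losses; the target o, the sample, and the constant hypothesis below are all made-up values, not from the slides:

```python
# Sketch of empirical risk R'(h, S) for a hypothesis h under absolute
# and squared loss. All concrete values are illustrative.

def empirical_risk(h, S, loss):
    """R'(h, S) = 1/|S| * sum of loss(h(x), y) over (x, y) in S."""
    return sum(loss(h(x), y) for x, y in S) / len(S)

abs_loss = lambda a, b: abs(a - b)
sq_loss = lambda a, b: (a - b) ** 2

# Training set S = {(x_i, o(x_i))} for a made-up target o(x) = 2x.
S = [(x, 2 * x) for x in [0.0, 0.5, 1.0]]
h = lambda x: 1.0  # a constant hypothesis

print(empirical_risk(h, S, abs_loss))  # mean of |1-0|, |1-1|, |1-2| = 2/3
```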

Page 6

Our Setting

• Input space X = R^k ((query, answer) pairs).
• Function class F (ranking functions).
• Set of players N = {1,...,n} (experts).
• Target functions oi: X → R.
• Distributions Di over X.
• Training set?


Page 7

The Learning Game

• Player i controls xij, j = 1,...,m, sampled w.r.t. Di (common knowledge).
• Private info of i: oi(xij) = yij, j = 1,...,m.
• Strategies of i: reported values y'ij, j = 1,...,m.
• h is obtained by learning on S = {(xij, y'ij)}.
• Cost of i: Ri(h) = Ex~Di[ℓ(h(x), oi(x))].
• Goal: social welfare (please the average player).


Page 8

Example: The learning game with ERM

• Parameters: X = R, F = constant functions, ℓ(a,b) = |a-b|, N = {1,2}, o1(x) = 1, o2(x) = 2, D1 = D2 = uniform distribution on [0,1000].

• Learning algorithm: Empirical Risk Minimization (ERM):
– Minimize R'(h,S) = 1/|S| Σ(x,y)∈S ℓ(h(x), y).

[Figure: the two constant targets, o1 ≡ 1 and o2 ≡ 2.]

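For constant functions under absolute loss, an empirical risk minimizer is any median of the reported labels. A sketch of the game above; the pooled sample below stands in for draws from the two uniform distributions and is illustrative:

```python
# ERM for F = constant functions under absolute loss: a minimizer of
# sum |h - y| is a median of the reported labels. The sample points
# below are illustrative stand-ins for draws from the distributions.

def erm_constant_abs(labels):
    """Return the lower median, one empirical risk minimizer."""
    ys = sorted(labels)
    return ys[(len(ys) - 1) // 2]

# Player 1 reports o1(x) = 1 on its points, player 2 reports o2(x) = 2.
reports = [1, 1, 1, 2, 2, 2]  # m = 3 points per player
h = erm_constant_abs(reports)
print(h)  # lower median of the pooled labels -> 1
```

With an even split of labels 1 and 2, any constant in [1,2] minimizes the empirical risk; which one ERM outputs is a tie-breaking choice (lower median here).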

Page 9

Degenerate Distributions: ERM with abs. loss

• The Game:
– Players: N = {1,...,n}; Di is degenerate at xi, so player i controls the single point xi.
– Private info of i: oi(xi) = yi.
– Strategies of i: reported value y'i.
– Cost of i: Ri(h) = ℓ(h(xi), yi).

• Theorem: If ℓ is the absolute loss and F is convex, then ERM is group incentive compatible.

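The theorem can be sanity-checked numerically for constant functions: ERM returns a median of the reported values, and no player can lower its cost ℓ(h(xi), yi) by misreporting. A small brute-force sketch with illustrative private values:

```python
# Brute-force incentive-compatibility check for ERM (= lower median)
# over constant functions with absolute loss, one point per player.
# The private values and the misreport grid are illustrative.

def erm(reports):
    ys = sorted(reports)
    return ys[(len(ys) - 1) // 2]  # lower median minimizes sum |h - y|

true_vals = [0.0, 3.0, 7.0]          # illustrative private values y_i
grid = [x * 0.5 for x in range(21)]  # candidate misreports in [0, 10]

honest_h = erm(true_vals)
for i, y in enumerate(true_vals):
    honest_cost = abs(honest_h - y)
    for lie in grid:
        reports = list(true_vals)
        reports[i] = lie
        # No misreport yields a strictly lower cost than truth-telling.
        assert abs(erm(reports) - y) >= honest_cost - 1e-12
print("no player can gain by lying")
```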

Page 10

ERM with superlinear loss

• Theorem: If ℓ is “superlinear”, F is convex, |F| ≥ 2, and F is not “full” on x1,...,xn, then there exist y1,...,yn such that some player has an incentive to lie.

• Example: X = R, F = constant functions, ℓ(a,b) = (a-b)^2, N = {1,2}.

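In the example, ERM under squared loss returns the mean of the reports, which a player can drag toward its own target by exaggerating. A minimal sketch; the private values and the misreport are my own illustrative numbers:

```python
# ERM for constant functions under squared loss: the minimizer of
# sum (h - y)^2 is the mean of the reports. One point per player;
# all numbers are illustrative.

def erm_sq(reports):
    return sum(reports) / len(reports)

y1, y2 = 0.0, 1.0                     # illustrative private values
honest_h = erm_sq([y1, y2])           # 0.5
honest_cost_1 = (honest_h - y1) ** 2  # 0.25

lying_h = erm_sq([-1.0, y2])          # player 1 exaggerates to -1 -> h = 0
lying_cost_1 = (lying_h - y1) ** 2    # 0.0: lying strictly helps

print(honest_cost_1, lying_cost_1)  # 0.25 0.0
```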

Page 11

Uniform dist. over samples

• The Game:
– Players: N = {1,...,n}; Di is the discrete uniform distribution on {xi1,...,xim}, and player i controls xij, j = 1,...,m.
– Private info of i: oi(xij) = yij.
– Strategies of i: reported values y'ij, j = 1,...,m.
– Cost of i: Ri(h) = R'i(h,S) = 1/m Σj ℓ(h(xij), yij).


Page 12

ERM with abs. loss is not IC

[Figure: counterexample (labels 1 and 0).]
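A concrete instance of this failure, with my own illustrative numbers and lower-median tie-breaking assumed (the slide's figure shows a similar construction): when a player controls several points, misreporting one of them can flip the global median in the player's favor.

```python
# With a uniform distribution over each player's own sample, a player's
# cost is its average absolute loss, and ERM (= lower median of all
# reported labels) becomes manipulable. Numbers are illustrative;
# lower-median tie-breaking is assumed.

def erm(reports):
    ys = sorted(reports)
    return ys[(len(ys) - 1) // 2]

def cost(h, true_vals):
    return sum(abs(h - y) for y in true_vals) / len(true_vals)

p1_true = [0, 0, 4]  # player 1's private values on its three points
p2_true = [3, 3, 3]  # player 2's private values

h_honest = erm(p1_true + p2_true)   # median of {0,0,3,3,3,4} = 3
h_lying = erm([0, 0, 0] + p2_true)  # player 1 reports 4 as 0 -> median 0

print(cost(h_honest, p1_true))  # 7/3: player 1's honest cost
print(cost(h_lying, p1_true))   # 4/3: lying is strictly better
```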

Page 13

VCG to the Rescue

• Use ERM.

• Each player i pays Σ_{j≠i} R'j(h,S).

• Each player's total cost is R'i(h,S) + Σ_{j≠i} R'j(h,S) = Σj R'j(h,S).

• Truthful for any loss function.
• VCG has many faults:
– Not group incentive compatible.
– Payments are problematic in practice.

• Would like (group) IC mechanisms without payments.

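The construction above can be sketched directly: run ERM, then charge each player the empirical risk it imposes on the others, so every player's total cost is the total empirical risk, which ERM already minimizes, making truth-telling optimal. Illustrative values; constant functions and squared loss, one point per player:

```python
# Sketch of the VCG payments: after ERM, player i pays
# sum over j != i of R'_j(h, S), so its total cost equals the total
# empirical risk sum_j R'_j(h, S). All numbers are illustrative.

def erm_sq(reports):
    return sum(reports) / len(reports)  # mean minimizes total squared risk

def risks(h, reports):
    return [(h - y) ** 2 for y in reports]

reports = [0.0, 1.0, 2.0]
h = erm_sq(reports)  # 1.0
r = risks(h, reports)  # per-player empirical risks [1.0, 0.0, 1.0]

payments = [sum(r) - r[i] for i in range(len(r))]  # what player i pays
totals = [r[i] + payments[i] for i in range(len(r))]
print(totals)  # every player's total cost is sum_j R'_j = 2.0
```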

Page 14

Mechanisms w/o Payments

• Absolute loss. An α-approximation mechanism gives an α-approximation of the optimal social welfare.

• Theorem (upper bound): There exists a group IC 3-approximation mechanism for constant functions over R^k and homogeneous linear functions over R.

• Theorem (lower bound): There is no IC (3-ε)-approximation mechanism for constant / homogeneous linear functions over R^k.

• Conjecture: There is no IC mechanism with bounded approximation ratio for homogeneous linear functions over R^k, k ≥ 2.

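The slides do not spell the mechanism out. One natural payment-free, group-IC ingredient, sketched here under my own assumptions and not necessarily the paper's exact mechanism, is to fit each player's points separately and return a median of the per-player fits: each player's cost is single-peaked in a constant h, so the median of the individual minimizers cannot be moved favorably by misreporting.

```python
# Sketch of a payment-free mechanism for constant functions under
# absolute loss: fit each player's points separately, then output the
# lower median of the per-player fits. Assumption: this is one natural
# group-IC construction, not necessarily the paper's exact mechanism.

def lower_median(vals):
    ys = sorted(vals)
    return ys[(len(ys) - 1) // 2]

def mechanism(reported_labels_per_player):
    per_player_fit = [lower_median(ys) for ys in reported_labels_per_player]
    return lower_median(per_player_fit)

# Illustrative reports: three players, three points each.
reports = [[0, 0, 1], [2, 2, 2], [5, 6, 7]]
print(mechanism(reports))  # per-player fits [0, 2, 6] -> median 2
```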

Page 15

Generalization

• Theorem: If for all f ∈ F:
– (1) for all i, |R'i(f,S) - Ri(f)| ≤ ε/2, and
– (2) |R'(f,S) - 1/n Σi Ri(f)| ≤ ε/2,
then:
– (group) IC in the uniform setting ⇒ ε-(group) IC in the general setting;
– α-approximation in the uniform setting ⇒ α-approximation up to an additive ε in the general setting.

• If F has bounded complexity and m = Ω(log(1/δ)/ε), then condition (1) holds with probability 1-δ.

• Condition (2) is obtained if (1) occurs for all i simultaneously; taking δ/n adds a factor of log n.


Page 16

Discussion

• Given m large enough, with probability 1-δ, VCG is ε-truthful. This holds for any loss function.

• Given m large enough and absolute loss, there is a mechanism without payments that is ε-group IC and a 3-approximation for constant functions and homogeneous linear functions.

• Most important direction for future work: extending to other models of learning, such as classification.
