Logistic regression 10-701 Machine Learning


Page 1:

Logistic regression

10-701

Machine Learning

Page 2:

Back to classification

1. Instance-based classifiers
   - Use observations directly (no model)
   - e.g., k-nearest neighbors

2. Generative:
   - build a generative statistical model
   - e.g., Bayesian networks

3. Discriminative:
   - directly estimate a decision rule/boundary
   - e.g., decision trees

Page 3:

Generative vs. discriminative classifiers

• When using generative classifiers we relied on all points to learn the generative model
• When using discriminative classifiers we mainly care about the boundary

[Figure: two scatter plots of the same X-Y data; the discriminative model draws only a decision boundary, while the generative model fits a distribution to each class]

Page 4:

Regression for classification

• In some cases we can use linear regression to determine the appropriate boundary.
• However, since the output is usually binary or discrete, there are more efficient regression methods.
• Recall that for classification we are interested in the conditional probability p(y | X; θ), where θ are the parameters of our model.
• When using regression, θ represents the values of our regression coefficients (w).

Page 5:

Regression for classification

• Assume we would like to use linear regression to learn the parameters for p(y | X; θ)

• Problems?

[Figure: 1-D data with binary labels at 1 and −1, with the optimal regression model drawn through them]

\( w^T X \geq 0 \Rightarrow \) classify as 1
\( w^T X < 0 \Rightarrow \) classify as −1
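A minimal Python sketch of this decision rule; the weight vector and input below are made-up values:

```python
import numpy as np

# Classify by the sign of w^T x, as in the rule above.
w = np.array([0.5, -1.0])   # hypothetical regression coefficients
x = np.array([2.0, 0.3])    # hypothetical input

label = 1 if w @ x >= 0 else -1
print(label)  # w^T x = 0.7 >= 0, so this prints 1
```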

Page 6:

The sigmoid function

• To classify using regression models we replace the linear function with the sigmoid function:

\[ g(h) = \frac{1}{1 + e^{h}} \]

which is always between 0 and 1.

• Using the sigmoid we set (for binary classification problems) \( p(y \mid X; \theta) \) as:

\[ p(y = 0 \mid X; \theta) = g(w^T X) = \frac{1}{1 + e^{w^T X}} \]

\[ p(y = 1 \mid X; \theta) = 1 - g(w^T X) = \frac{e^{w^T X}}{1 + e^{w^T X}} \]
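A minimal Python sketch of these definitions; note that with the slide's convention \( g(h) = 1/(1+e^{h}) \), \( p(y=1 \mid X) = 1 - g(w^T X) \) coincides with the more familiar \( 1/(1+e^{-w^T X}) \). The weights and input are made-up values:

```python
import numpy as np

def g(h):
    # The sigmoid as defined on this slide: g(h) = 1 / (1 + e^h)
    return 1.0 / (1.0 + np.exp(h))

def p_y0(w, x):
    return g(w @ x)           # p(y = 0 | x; w)

def p_y1(w, x):
    return 1.0 - g(w @ x)     # p(y = 1 | x; w) = e^{w^T x} / (1 + e^{w^T x})

w = np.array([1.0, -2.0])     # hypothetical weights
x = np.array([0.5, 0.25])     # hypothetical input
assert np.isclose(p_y0(w, x) + p_y1(w, x), 1.0)  # the two probabilities sum to 1
```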

Page 7:

The sigmoid function


Note that we are defining the probabilities in terms of p(y | X). No need to use Bayes rule here!

Page 8:

Logistic regression vs. linear regression

\[ p(y = 0 \mid X; \theta) = g(w^T X) = \frac{1}{1 + e^{w^T X}} \qquad p(y = 1 \mid X; \theta) = 1 - g(w^T X) = \frac{e^{w^T X}}{1 + e^{w^T X}} \]

Page 9:

Determining parameters for logistic regression problems

• So how do we learn the parameters?
• Similar to other regression problems, we look for the MLE for w.
• Defining a new function g:

\[ p(y = 0 \mid X; w) = g(X; w) = \frac{1}{1 + e^{w^T X}} \qquad p(y = 1 \mid X; w) = 1 - g(X; w) = \frac{e^{w^T X}}{1 + e^{w^T X}} \]

• The likelihood of the data given the model is:

\[ L(y \mid X; w) = \prod_i \big(1 - g(X_i; w)\big)^{y_i}\, g(X_i; w)^{\,1 - y_i} \]
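A sketch of this likelihood in Python, on a made-up design matrix X (one row per sample) and label vector y:

```python
import numpy as np

def likelihood(w, X, y):
    # L(y | X; w) = prod_i (1 - g)^{y_i} * g^{1 - y_i}, with g = g(X_i; w)
    gi = 1.0 / (1.0 + np.exp(X @ w))
    return np.prod((1.0 - gi) ** y * gi ** (1.0 - y))

X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5]])  # toy data
y = np.array([1.0, 0.0, 1.0])                        # binary labels
print(likelihood(np.zeros(2), X, y))                 # 0.5^3 = 0.125 at w = 0
```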

Page 10:

Solving logistic regression problems

• The likelihood of the data is:

\[ L(y \mid X; w) = \prod_i \big(1 - g(X_i; w)\big)^{y_i}\, g(X_i; w)^{\,1 - y_i}, \qquad g(X; w) = \frac{1}{1 + e^{w^T X}} \]

• Taking the log we get:

\[
LL(y \mid X; w) = \sum_{i=1}^{N} \Big[ y_i \ln\big(1 - g(X_i; w)\big) + (1 - y_i) \ln g(X_i; w) \Big]
= \sum_{i=1}^{N} \Big[ y_i \ln \frac{1 - g(X_i; w)}{g(X_i; w)} + \ln g(X_i; w) \Big]
= \sum_{i=1}^{N} \Big[ y_i\, w^T X_i - \ln\big(1 + e^{w^T X_i}\big) \Big]
\]
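The last simplification is easy to check numerically; a short sketch with randomly generated data (all values made up):

```python
import numpy as np

# For each sample, y*ln(1-g) + (1-y)*ln(g) collapses to y*w^T x - ln(1 + e^{w^T x}).
rng = np.random.default_rng(0)
w = rng.normal(size=3)
X = rng.normal(size=(5, 3))
y = rng.integers(0, 2, size=5).astype(float)

h = X @ w
g = 1.0 / (1.0 + np.exp(h))
ll_long = np.sum(y * np.log(1.0 - g) + (1.0 - y) * np.log(g))
ll_short = np.sum(y * h - np.log(1.0 + np.exp(h)))
assert np.isclose(ll_long, ll_short)
```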

Page 11:

Maximum likelihood estimation

Taking the partial derivative w.r.t. each component of the w vector:

\[
\frac{\partial}{\partial w_j}\, l(w) = \frac{\partial}{\partial w_j} \sum_{i=1}^{N} \Big[ y_i\, w^T X_i - \ln\big(1 + e^{w^T X_i}\big) \Big]
= \sum_{i=1}^{N} X_i^j \Big[ y_i - \big(1 - g(X_i; w)\big) \Big]
= \sum_{i=1}^{N} X_i^j \Big[ y_i - p(y_i = 1 \mid X_i; w) \Big]
\]

Bad news: no closed-form solution!
Good news: concave function.

(Recall \( g(X; w) = \frac{1}{1 + e^{w^T X}} \), so \( 1 - g(X; w) = \frac{e^{w^T X}}{1 + e^{w^T X}} \).)
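A vectorized sketch of this gradient in Python: stacking the samples as rows of X, all the partials come out at once as \( X^T (y - p_1) \) (the names are assumptions, not from the slides):

```python
import numpy as np

def gradient(w, X, y):
    # dl/dw_j = sum_i X_i^j * (y_i - p(y_i = 1 | X_i; w)), for every j at once
    p1 = 1.0 - 1.0 / (1.0 + np.exp(X @ w))   # p(y = 1 | X_i; w) per row
    return X.T @ (y - p1)
```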

Page 12:

Gradient ascent

\( z = x\,(y - g(w; x)) \)

[Figure: z plotted as a function of w, with the slope \( \partial z / \partial w \) marked at the current w]

• Going in the direction of the slope will lead to a larger z
• But not too much, otherwise we would go beyond the optimal w

Page 13:

Gradient descent

\( z = (f(w) - y)^2 \)

[Figure: z plotted as a function of w, with the slope \( \partial z / \partial w \) marked at the current w]

• Going in the opposite direction to the slope will lead to a smaller z
• But not too much, otherwise we would go beyond the optimal w
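A one-dimensional sketch of gradient descent on this example, taking \( f(w) = w \) and \( y = 3 \) (both made-up choices); gradient ascent would add \( \varepsilon \cdot \text{slope} \) instead of subtracting it:

```python
# Minimize z = (f(w) - y)^2 with f(w) = w and a fixed target y = 3.
y = 3.0
slope = lambda w: 2.0 * (w - y)    # dz/dw

w = 0.0                            # starting guess
eps = 0.1                          # small enough not to overshoot the optimum
for _ in range(50):
    w = w - eps * slope(w)         # descent: move against the slope
print(round(w, 3))                 # approaches the optimal w = 3.0
```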

Page 14:

Gradient ascent for logistic regression

\[ \frac{\partial}{\partial w_j}\, l(w) = \sum_{i=1}^{N} X_i^j \Big[ y_i - \big(1 - g(X_i; w)\big) \Big] \]

We use the gradient to adjust the value of w:

\[ w_j \leftarrow w_j + \varepsilon \sum_{i=1}^{N} X_i^j \Big[ y_i - \big(1 - g(X_i; w)\big) \Big] \]

where \( \varepsilon \) is a (small) constant.

Page 15:

Algorithm for logistic regression

1. Choose \( \varepsilon \)
2. Start with a guess for w
3. For all j set
\[ w_j \leftarrow w_j + \varepsilon \sum_{i=1}^{N} X_i^j \Big[ y_i - \big(1 - g(X_i; w)\big) \Big] \]
4. If no improvement in
\[ LL(y \mid X; w) = \sum_{i=1}^{N} \Big[ y_i \ln\big(1 - g(X_i; w)\big) + (1 - y_i) \ln g(X_i; w) \Big] \]
stop. Otherwise go to step 3.
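A minimal sketch of the whole algorithm in Python; the function name, step size, tolerance, and toy data are assumptions, not from the slides:

```python
import numpy as np

def train_logistic(X, y, eps=0.1, tol=1e-6, max_iter=10000):
    w = np.zeros(X.shape[1])                       # step 2: start with a guess
    ll_old = -np.inf
    for _ in range(max_iter):
        h = X @ w
        ll = np.sum(y * h - np.log1p(np.exp(h)))   # current log-likelihood
        if ll - ll_old < tol:                      # step 4: stop if no improvement
            break
        ll_old = ll
        p1 = 1.0 - 1.0 / (1.0 + np.exp(h))         # p(y = 1 | X_i; w)
        w = w + eps * (X.T @ (y - p1))             # step 3: update every w_j
    return w

X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5], [1.0, -2.0]])  # toy data
y = np.array([1.0, 0.0, 1.0, 0.0])
print(train_logistic(X, y))
```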

Page 16:

Regularization

• Similar to other data estimation problems, we may not have enough samples to learn good models for logistic regression classification.
• One way to overcome this is to 'regularize' the model: impose additional constraints on the parameters we are fitting.
• For example, let's assume that \( w_j \) comes from a Gaussian distribution with mean 0 and variance \( \sigma^2 \) (where \( \sigma^2 \) is a user-defined parameter): \( w_j \sim N(0, \sigma^2) \).
• In that case we have a prior on the parameters and so:
\[ p(y = 1, \theta \mid X) = p(y = 1 \mid X; \theta)\, p(\theta) \]

Page 17:

Regularization

• If we regularize the parameters we need to take the prior into account when computing the posterior for our parameters:
\[ p(y = 1, \theta \mid X) = p(y = 1 \mid X; \theta)\, p(\theta) \]
• Here we use a Gaussian model for the prior. Assuming a mean of 0 and removing terms that are not dependent on w, the log-likelihood changes to:
\[ LL(w; y \mid X) = \sum_{i=1}^{N} \Big[ y_i\, w^T X_i - \ln\big(1 + e^{w^T X_i}\big) \Big] - \sum_j \frac{w_j^2}{2\sigma^2} \]
where \( \sigma^2 \) is the variance of our prior model.
• And the new update rule (after taking the derivative w.r.t. \( w_j \)) is:
\[ w_j \leftarrow w_j + \varepsilon \Big( \sum_{i=1}^{N} X_i^j \Big[ y_i - \big(1 - g(X_i; w)\big) \Big] - \frac{w_j}{\sigma^2} \Big) \]
This is also known as the MAP estimate.
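A sketch of one MAP update step in Python; compared with the unregularized rule, the only change is the extra \( -w_j/\sigma^2 \) term (names and default values are assumptions):

```python
import numpy as np

def map_step(w, X, y, eps=0.1, sigma2=1.0):
    # w_j <- w_j + eps * (sum_i X_i^j * (y_i - p(y_i=1|X_i;w)) - w_j / sigma^2)
    p1 = 1.0 - 1.0 / (1.0 + np.exp(X @ w))   # p(y = 1 | X_i; w) per sample
    return w + eps * (X.T @ (y - p1) - w / sigma2)
```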

Page 18:

Regularization

• There are many other ways to regularize logistic regression

• The Gaussian model leads to an L2 regularization (we are trying to minimize the squared values of w)
• Another popular regularization is L1, which tries to minimize |w|
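The two penalties, side by side, on a made-up weight vector:

```python
import numpy as np

w = np.array([0.5, -2.0, 0.0, 1.5])
l2_penalty = np.sum(w ** 2)      # L2: sum of squared weights = 6.5
l1_penalty = np.sum(np.abs(w))   # L1: sum of absolute weights = 4.0
```

L1 tends to drive individual weights exactly to zero, which is why it is often used when a sparse model is wanted.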

Page 19:

Logistic regression for more than 2 classes

• Logistic regression can be used to classify data from more than 2 classes. Assume we have k classes; then:
• For i < k we set
\[ p(y = i \mid X; \theta) = g_i(w_{i0} + w_{i1} x_1 + \dots + w_{id} x_d) = g_i(w_i^T X) \]
where
\[ g_i(z) = \frac{e^{z_i}}{1 + \sum_{j=1}^{k-1} e^{z_j}}, \qquad z_i = w_{i0} + w_{i1} x_1 + \dots + w_{id} x_d \]
• And for k we have
\[ p(y = k \mid X; \theta) = \frac{1}{1 + \sum_{j=1}^{k-1} e^{z_j}} = 1 - \sum_{i=1}^{k-1} p(y = i \mid X; \theta) \]
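A sketch of this k-class rule in Python; W holds the hypothetical weight vectors \( w_i \) for classes \( i < k \) as rows (with the bias folded in), and class k is the reference class:

```python
import numpy as np

def multiclass_probs(W, x):
    z = W @ x                                  # z_i = w_i^T x for i < k
    denom = 1.0 + np.sum(np.exp(z))
    return np.append(np.exp(z), 1.0) / denom   # classes 1..k-1, then class k

W = np.array([[1.0, -0.5], [0.2, 0.4]])        # made-up weights: k = 3, d = 2
x = np.array([1.0, 2.0])
p = multiclass_probs(W, x)
assert np.isclose(p.sum(), 1.0)                # the k probabilities sum to 1
```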

Page 20:

Logistic regression for more than 2 classes

Binary logistic regression is a special case of this rule.

Page 21:

Update rule for logistic regression with multiple classes

\[ \frac{\partial}{\partial w_j^m}\, l(w) = \sum_{i=1}^{N} X_i^j \Big[ \delta(y_i) - p(y_i = m \mid X_i; w) \Big] \]

where \( \delta(y_i) = 1 \) if \( y_i = m \) and \( \delta(y_i) = 0 \) otherwise.

The update rule becomes:

\[ w_j^m \leftarrow w_j^m + \varepsilon \sum_{i=1}^{N} X_i^j \Big[ \delta(y_i) - p(y_i = m \mid X_i; w) \Big] \]
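A vectorized sketch of this update in Python, labeling the classes 0..k-1 so that the last class is the reference (that indexing is an assumption of the sketch):

```python
import numpy as np

def multiclass_step(W, X, y, eps=0.1):
    # W: (k-1) x d weights, X: N x d samples, y: labels in {0, ..., k-1}
    Z = X @ W.T                                          # z_i per sample and class
    P = np.exp(Z) / (1.0 + np.exp(Z).sum(axis=1, keepdims=True))
    delta = (y[:, None] == np.arange(W.shape[0])[None, :]).astype(float)
    return W + eps * (delta - P).T @ X                   # one update per class m
```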

Page 22:

Data transformation

• Similar to what we did with linear regression, we can extend logistic regression to other transformations of the data:
\[ p(y = 1 \mid X; w) = g\big(w_0 + w_1 \phi_1(X) + \dots + w_d \phi_d(X)\big) \]
• As before, we are free to choose the basis functions \( \phi_j \).
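A sketch with hypothetical basis functions: a quadratic expansion of a single input, with \( \phi_0 = 1 \) absorbing \( w_0 \):

```python
import numpy as np

def phi(x):
    return np.array([1.0, x, x ** 2])          # made-up basis: 1, x, x^2

def p_y1(w, x):
    h = w @ phi(x)                             # w^T phi(x) replaces w^T x
    return np.exp(h) / (1.0 + np.exp(h))       # p(y = 1 | x; w), as before

w = np.array([0.1, -0.3, 0.8])                 # made-up coefficients
print(p_y1(w, 2.0))
```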

Page 23:

Important points

• Advantage of logistic regression over linear regression for classification

• Sigmoid function

• Gradient ascent / descent

• Regularization

• Logistic regression for multiple classes

Page 24:

Logistic regression

• The name comes from the logit transformation:
\[ \log \frac{p(y = i \mid X; \theta)}{p(y = k \mid X; \theta)} = z_i = w_{i0} + w_{i1} x_1 + \dots + w_{id} x_d \]
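The identity is easy to verify numerically; reusing the made-up k = 3 weights from the earlier multi-class sketch:

```python
import numpy as np

W = np.array([[1.0, -0.5], [0.2, 0.4]])        # hypothetical w_i rows, k = 3
x = np.array([1.0, 2.0])

z = W @ x
denom = 1.0 + np.sum(np.exp(z))
p = np.append(np.exp(z), 1.0) / denom          # p(y = i | x) for i = 1..k
assert np.allclose(np.log(p[:-1] / p[-1]), z)  # log-odds equal the linear z_i
```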