
When Recommendation Systems Go Bad

Evan Estola, 12/12/16

About Me

Evan Estola

Staff Machine Learning Engineer, Data Team Lead @ Meetup

evan@meetup.com

@estola

We make community real

268,000 Meetup Groups

28.8 Million Members

180 Countries

We want a world full of real, local community. Women's Veterans Meetup, San Antonio, TX

Why Recs at Meetup Are Hard

Cold Start

Sparsity

Lies

Recommendation Systems: Collaborative Filtering

Recommendation Systems: Rating Prediction

Netflix Prize

How many stars would user X give movie Y?

Boring

Recommendation Systems: Learning to Rank

Active area of research

Use ML model to solve a ranking problem

Pointwise: Logistic Regression on binary label, use output for ranking

Listwise: Optimize entire list
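
The pointwise setup above can be sketched with scikit-learn. The features and data below are invented toy values, not Meetup's actual model: train a classifier on a binary label, then rank candidates by predicted probability.

```python
# Pointwise learning-to-rank sketch (toy data): fit logistic regression on
# a binary "joined?" label, then sort candidates by predicted probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training rows: [distance_km, shared_topics] -> joined (1) or not (0)
X = np.array([[1.0, 5], [20.0, 0], [2.0, 3], [30.0, 1], [0.5, 4], [25.0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Rank unseen candidate groups by predicted join probability.
candidates = np.array([[15.0, 1], [1.0, 4], [8.0, 2]])
scores = model.predict_proba(candidates)[:, 1]
ranking = np.argsort(-scores)  # indices of candidates, best first
```

The model's probability output is never shown to the user; only the ordering it induces matters, which is what separates ranking from rating prediction.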

Performance Metrics

Mean Average Precision

Precision at K (P@K)

Discounted Cumulative Gain
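
Two of these metrics are small enough to hand-roll. This sketch assumes binary relevance for P@K and the standard log2 discount for DCG:

```python
# Minimal ranking-metric sketches: P@K with binary relevance, DCG with a
# log2 position discount.
import math

def precision_at_k(relevant, ranked, k):
    """Fraction of the top-k ranked items that are relevant."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def dcg(gains):
    """Discounted Cumulative Gain: gain at rank i discounted by log2(i+2)."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

ranked = ["a", "b", "c", "d"]
relevant = {"a", "c"}
p_at_2 = precision_at_k(relevant, ranked, 2)  # "a" relevant, "b" not -> 0.5
dcg_score = dcg([3, 2, 0, 1])
```

Mean Average Precision averages precision over the ranks of all relevant items and then over queries; it rewards putting every relevant item early, not just the top K.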

Data Science Impacts Lives

Ads you see

Friend’s Activity/Facebook feed

News you’re exposed to

If a product is available

If you can get a ride

Price you pay for things

Admittance into college

Job openings you find

Job openings you can get

If you can get a loan

You just wanted a kitchen scale; now Amazon thinks you're a drug dealer

Ego

Member/customer/user first

Focus on building the best product, not on being the most clever data scientist

Much harder to spin a positive user story than a story about how smart you are

“Black-sounding” names 25% more likely to be served ad suggesting criminal record

Ethics

We have accepted that machine learning can seem creepy; how do we prevent it from becoming immoral?

We have an ethical obligation not to teach machines to be prejudiced.

Data Ethics

Awareness

Talk about it!

Identify groups that could be negatively impacted by your work

Make a choice. Take a stand.

Interpretable Models

For simple problems, simple solutions are often worth a small concession in performance

Inspectable models make it easier to debug problems in data collection, feature engineering, etc.

Only include features that work the way you want

Don't include feature interactions that you don't want

Logistic Regression

StraightDistanceFeature(-0.0311f),
ChapterZipScore(0.0250f),
RsvpCountFeature(0.0207f),
AgeUnmatchFeature(-1.5876f),
GenderUnmatchFeature(-3.0459f),
StateMatchFeature(0.4931f),
CountryMatchFeature(0.5735f),
FacebookFriendsFeature(1.9617f),
SecondDegreeFacebookFriendsFeature(0.1594f),
ApproxAgeUnmatchFeature(-0.2986f),
SensitiveUnmatchFeature(-0.1937f),
KeywordTopicScoreFeatureNoSuppressed(4.2432f),
TopicScoreBucketFeatureNoSuppressed(1.4469f, 0.257f, 10f),
TopicScoreBucketFeatureSuppressed(0.2595f, 0.099f, 10f),
ExtendedTopicsBucketFeatureNoSuppressed(1.6203f, 1.091f, 10f),
ChapterRelatedTopicsBucketFeatureNoSuppressed(0.1702f, 0.252f, 0.641f),
ChapterRelatedTopicsBucketFeatureNoSuppressed(0.4983f, 0.641f, 10f),
DoneChapterTopicsFeatureNoSuppressed(3.3367f)
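
One illustration of why a linear model like this is inspectable: the score is a plain weighted sum, so each feature's contribution can be read off and ranked directly. The weights below echo a few entries from the slide; the feature values are invented:

```python
# Inspectability sketch: in a linear model, score = sum(weight * feature),
# so per-feature contributions explain any individual prediction.
# Weights echo the slide; feature values are illustrative.
weights = {
    "GenderUnmatchFeature": -3.0459,
    "FacebookFriendsFeature": 1.9617,
    "StateMatchFeature": 0.4931,
}
features = {
    "GenderUnmatchFeature": 1.0,   # member's gender doesn't match the group
    "FacebookFriendsFeature": 0.0, # no friends in the group
    "StateMatchFeature": 1.0,      # same state
}

contributions = {name: weights[name] * features[name] for name in weights}
score = sum(contributions.values())
# Sorting contributions answers "why was this recommendation suppressed?"
blame = sorted(contributions.items(), key=lambda kv: kv[1])
```

A deep model can score better on offline metrics, but it cannot produce this kind of one-line answer when a recommendation goes wrong.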

Feature Engineering and Interactions

● Good Feature: Join! You're interested in Tech x Meetup is about Tech

● Good Feature: Don't join! Group is intended only for Women x You are a Man

● Bad Feature: Don't join! Group is mostly Men x You are a Woman

● Horrible Feature: Don't join! Meetup is about Tech x You are a Woman

Meetup is not interested in propagating gender stereotypes
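
The good/bad split above can be enforced mechanically by whitelisting which feature crosses are allowed to exist. This is a hypothetical sketch; all names are invented:

```python
# Feature-cross whitelist sketch: interactions exist only if explicitly
# allowed, so "gender x topic" can never leak into the model.
ALLOWED_CROSSES = {
    ("member_interest", "group_topic"),        # good: interest match
    ("member_gender", "group_gender_policy"),  # good: respects group intent
}
# Deliberately absent: ("member_gender", "group_topic")

def cross_features(member, group):
    """Emit only whitelisted interaction features for one (member, group) pair."""
    crosses = {}
    for m_key, g_key in ALLOWED_CROSSES:
        crosses[f"{m_key}_x_{g_key}"] = (member[m_key], group[g_key])
    return crosses

feats = cross_features(
    {"member_interest": "tech", "member_gender": "F"},
    {"group_topic": "tech", "group_gender_policy": "women_only"},
)
```

The point of the whitelist is that the horrible cross is not merely down-weighted; it cannot be generated at all, no matter what the training data would reward.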

Ensemble Models and Data Segregation

Ensemble Models: Combine outputs of several classifiers for increased accuracy

If you have features that are useful but you're worried about interactions (and your model builds them automatically), use ensemble modeling to restrict those features to separate models.

Ensemble Model, Data Segregation

Model 1 data: Interests, Searches, Friends, Location → Model 1 prediction

Model 2 data: Gender, Friends, Location → Model 2 prediction

Final model data: Model 1 prediction, Model 2 prediction → Final prediction
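
A minimal sketch of this segregation scheme, with synthetic data and scikit-learn (the actual Meetup ensemble is not shown here): each base model sees only its own feature set, and the final model sees only the two scalar predictions.

```python
# Data-segregation ensemble sketch: sensitive and non-sensitive features go
# to separate models; the combiner sees only their predictions, so the raw
# features can never interact inside one model. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X_interest = rng.normal(size=(n, 3))   # interests, searches, location
X_sensitive = rng.normal(size=(n, 2))  # e.g. gender-correlated signals
y = (X_interest[:, 0] + 0.5 * X_sensitive[:, 0] > 0).astype(int)

model1 = LogisticRegression().fit(X_interest, y)
model2 = LogisticRegression().fit(X_sensitive, y)

# The combiner's inputs are two scalars, not the raw feature columns.
stacked = np.column_stack([
    model1.predict_proba(X_interest)[:, 1],
    model2.predict_proba(X_sensitive)[:, 1],
])
final = LogisticRegression().fit(stacked, y)
preds = final.predict(stacked)
```

The trade-off is explicit: you keep some of the predictive value of the segregated features while structurally ruling out crosses like "gender x topic".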

Fake profiles, track ads

Ad for career coaching for "200k+" executive jobs:

Male group: 1,852 impressions

Female group: 318 impressions

Diversity-Controlled Testing

CMU - AdFisher

Crawls ads with simulated user profiles

Same technique can work to find bias in your own models!

Generate Test Data

Randomize sensitive feature in real data set

Run Model

Evaluate for unacceptable biased treatment

Must identify what features are sensitive and what outcomes are unwanted
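
The test loop above can be sketched as follows. `biased_model` is a hypothetical stand-in for a trained model, and the gap threshold you flag on is up to you:

```python
# Diversity-controlled test sketch: randomize the sensitive feature in a
# test set, run the model, and compare outcome rates across the randomized
# groups. A large gap under randomization flags biased treatment.
import numpy as np

def biased_model(age, distance):
    # Stand-in for a trained model; only non-sensitive inputs should matter.
    return (distance < 10).astype(int)

rng = np.random.default_rng(42)
n = 1000
distance = rng.uniform(0, 20, n)
age = rng.integers(18, 70, n)
gender = rng.integers(0, 2, n)  # sensitive feature, assigned at random

recs = biased_model(age, distance)
rate_g0 = recs[gender == 0].mean()
rate_g1 = recs[gender == 1].mean()
gap = abs(rate_g0 - rate_g1)  # should be near zero if gender is ignored
```

Because gender is randomized independently of everything else, any stable gap in recommendation rates can only come from the model's treatment of that feature, not from confounds in the data.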

● Twitter bot

● "Garbage in, garbage out"

● Responsibility?

“In the span of 15 hours Tay referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now." Tweeting "Bruce Jenner" at the bot got similar mixed response, ranging from "caitlyn jenner is a hero & is a stunning, beautiful woman!" to the transphobic "caitlyn jenner isn't a real woman yet she won woman of the year?"”

Tay.ai

Diverse Test Data

Outliers can matter

The real world is messy

Some people will mess with you

Some people look/act different than you

Defense

Diversity Design

You know racist computers are a bad idea

Don't let your company invent racist computers

@estola