
Using the Context of User Feedback in Recommender Systems



Page 1: Using the Context of User Feedback in Recommender Systems

Using the Context of User Feedback in Recommender Systems

Ladislav Peška

Department of Software Engineering, Charles University in Prague, Czech Republic

MEMICS 2016, Telč, Czech Republic

Page 2: Using the Context of User Feedback in Recommender Systems


Recommender Systems
Propose relevant items to the right persons at the right time
A machine learning application
Expose otherwise hard-to-find, unknown items
Complementary to catalogues, search engines, etc.
A "win-win strategy"


[Diagram: user feedback (rating, clickstream, time on page, purchases, …), user/object profiles (object attributes), and context (time, location, possible choices, …) feed into the RECOMMENDER SYSTEM, which outputs the top-k recommended objects.]

Page 3: Using the Context of User Feedback in Recommender Systems


Recommender Systems

User feedback
Explicit feedback (user rating)
Implicit feedback (user behavior): user visited/bought an object

Recommending algorithms
Collaborative filtering (users A and B were similar so far, so they should like similar things in the future too); suffers from the cold-start problem
Content-based filtering (user A should like items similar to the ones he liked so far); overspecialization, lack of diversity, obvious recommendations…


Page 4: Using the Context of User Feedback in Recommender Systems


Challenge: Recommending for small e-commerce websites

Tens of similar vendors; the user can choose whichever she likes
(Almost) no explicit feedback (no incentives for users)
Few visited pages (often arrival from external search engines, landing directly on object detail pages)
Low user loyalty (new vs. returning visitors ratio 80:20)
⇒ Not enough data for collaborative filtering

Focus on implicit feedback & content-based recommendations
Obtain precise implicit user feedback in sufficient quantity
Gather external content to improve CB recommendations (other papers)


Page 5: Using the Context of User Feedback in Recommender Systems

User Feedback

Explicit feedback
Provided via the website GUI
Rating an object on a Likert scale
Explicitly comparing objects ("A or B") is not so common
Missing in small e-commerce sites

Implicit feedback
Often binary in the literature: user visited an object, user bought an object

Page 6: Using the Context of User Feedback in Recommender Systems

User Feedback

Explicit feedback
Provided via the website GUI
Rating an object on a Likert scale
Explicitly comparing objects is not so common
Missing in small e-commerce sites

Implicit feedback
Often binary in the literature: user visited an object, user bought an object
Virtually any event triggered by the user could serve as feedback
Gives a better picture of user engagement/preference

Page 7: Using the Context of User Feedback in Recommender Systems

User Feedback

Explicit feedback
Provided via the website GUI
Rating an object on a Likert scale
Explicitly comparing objects is not so common
Missing in small e-commerce sites

Implicit feedback
Virtually any event could be used as feedback
Tracked via JavaScript: dwell time, number of page views, scrolling, mouse events, copying text, printing, the purchase process, etc.
Purchases represent fully positive feedback

Page 8: Using the Context of User Feedback in Recommender Systems

User Feedback

Software: Peska, IPIget: The Component for Collecting Implicit User Preference Indicators


Page 9: Using the Context of User Feedback in Recommender Systems

User Feedback – Past Research
Combine multiple implicit feedback features to estimate the user rating

Standard CB / CF recommender systems can be used afterwards

Improvements over the usage of simple implicit feedback

Peska, Vojtas: How to Interpret Implicit User Feedback?
Peska, Eckhardt, Vojtas: Preferential Interpretation of Fuzzy Sets in E-shop Recommendation with Real Data Experiments


[Example: dwell time 10 s, scrolling 100 px, mouse movement 250 px → rating 0.2; dwell time 100 s, scrolling 200 px, mouse movement 450 px → rating 0.8]
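To make the example concrete, below is a minimal, purely illustrative sketch of turning several implicit feedback signals into a single estimated rating in [0, 1] via a weighted combination. The normalization caps and weights are assumptions for illustration, not the learned interpretation from the cited papers.

```python
# Illustrative sketch only: map a few implicit feedback signals to a single
# estimated rating in [0, 1]. The caps and weights below are assumptions.

def estimate_rating(dwell_time_s, scroll_px, mouse_px,
                    max_dwell=120.0, max_scroll=2000.0, max_mouse=3000.0):
    """Combine normalized feedback signals into one rating in [0, 1]."""
    signals = [
        min(dwell_time_s / max_dwell, 1.0),
        min(scroll_px / max_scroll, 1.0),
        min(mouse_px / max_mouse, 1.0),
    ]
    weights = [0.5, 0.25, 0.25]  # assumed relative importance of each signal
    return sum(w * s for w, s in zip(weights, signals))

print(estimate_rating(10, 100, 250))    # low engagement  -> low rating
print(estimate_rating(100, 200, 450))   # higher engagement -> higher rating
```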

Page 10: Using the Context of User Feedback in Recommender Systems

User Feedback – Past Research
Combine multiple implicit feedback features to estimate the user rating

Standard CB / CF recommender systems can be used afterwards

Improvements over the usage of simple implicit feedback

Is that the whole picture?

Page 11: Using the Context of User Feedback in Recommender Systems

Context of User Feedback
Combine multiple implicit feedback features to estimate the user rating

Is that the whole picture?

Pages may vary substantially in length, amount of content, etc.
This can affect the observed implicit feedback features
Leveraging context could therefore be important

Page 12: Using the Context of User Feedback in Recommender Systems


Context of User Feedback


Context of the user
Location, mood, seasonality, ...
Can affect user preference
Out of scope of this paper

Context of device and page
Page and browser dimensions
Page complexity (amount of text, links, images, ...)
Device type
Date and time
Can affect the perceived values of the user feedback

Page 13: Using the Context of User Feedback in Recommender Systems

Outline of Our Approach

Traditional recommender
User rates a sample of objects: $r_{u,o}: o \in \mathbf{S} \subset \mathbf{O};\ r_{u,o} \in [0,1]$
Preference learning computes expected ratings of all objects
Top-k best rated objects are recommended: $\hat{O}_u = \{o_1, \ldots, o_k\}$

Our approach
Several implicit feedback and contextual features are collected for each visited object
The inferred user rating is learned as a probability of purchase, based on the feedback and context features: $F_{u,o}, C_{u,o} \to r_{u,o}: o \in \mathbf{S}$
Use contextual features; employ feature engineering
Preference learning computes expected ratings of all objects
Top-k best rated objects are recommended: $\hat{O}_u = \{o_1, \ldots, o_k\}$
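A minimal sketch of the core step above, assuming a tabular export of the logged visits and scikit-learn as the learner (both assumptions for illustration): fit a classifier on the implicit feedback and context features with the purchase flag as the binary target, then use the predicted purchase probability as the inferred rating $r_{u,o}$.

```python
# Sketch: inferred rating = predicted purchase probability.
# The CSV name, column names, and scikit-learn are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier

# One row per (user, object) visit: implicit feedback + context + purchase flag.
visits = pd.read_csv("visits.csv")   # hypothetical export of the collected logs
feature_cols = [c for c in visits.columns
                if c not in ("user_id", "object_id", "purchase")]

# AdaBoost over its default depth-1 trees, i.e. boosted decision stumps.
model = AdaBoostClassifier(n_estimators=100)
model.fit(visits[feature_cols], visits["purchase"])

# r_{u,o}: probability of the positive (purchase) class for each visited object.
visits["rating"] = model.predict_proba(visits[feature_cols])[:, 1]
```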

Page 14: Using the Context of User Feedback in Recommender Systems


Outline of Our Approach


Page 15: Using the Context of User Feedback in Recommender Systems


Collecting User Behavior
The IPIget component for collecting user behavior
IPIget component download: http://ksi.mff.cuni.cz/~peska/ipiget.zip


Contextual features:
Number of links
Number of images
Text size
Page dimensions
Browser window dimensions
Visible area ratio
Hand-held device

Implicit feedback features:
View count
Dwell time
Mouse distance
Mouse time
Scroll distance
Scroll time
Purchase
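For illustration, a per-visit record collected by the tracking component could be represented as below; the field names are assumptions that mirror the feature lists above, not the actual IPIget schema.

```python
# Sketch: one record per (user, object) visit; field names are assumptions.
from dataclasses import dataclass

@dataclass
class VisitRecord:
    user_id: str
    object_id: str
    # contextual features
    num_links: int
    num_images: int
    text_size: int
    page_width: int
    page_height: int
    window_width: int
    window_height: int
    visible_area_ratio: float
    handheld_device: bool
    # implicit feedback features
    view_count: int
    dwell_time: float       # seconds
    mouse_distance: float   # pixels
    mouse_time: float       # seconds
    scroll_distance: float  # pixels
    scroll_time: float      # seconds
    purchase: bool
```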

Page 16: Using the Context of User Feedback in Recommender Systems


Feature Engineering

Relative user feedback
Deals with user-specific bias

Contextualized scrolling
Scrolled area, hit page bottom

Relate feedback to the page-complexity context
Ratio between a feedback feature (e.g., dwell time) and a page complexity feature (e.g., text size); see the sketch below

In total, up to 100 input features for the purchase prediction task.
Five different feature sets are evaluated:
Binary (rating = 1 for each visited object)
Dwell time only
Raw feedback, no context
Raw feedback + raw context
All features
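A minimal sketch of the kinds of derived features described above, using pandas; the column names (user_id, dwell_time, page_height, window_height, text_size, ...) are illustrative assumptions rather than the exact feature names used in the paper.

```python
# Sketch of the derived features described above; column names are assumed.
import numpy as np
import pandas as pd

def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Relative user feedback: compare each value to the same user's average
    # to compensate for user-specific bias (fast vs. slow readers, etc.).
    for col in ("dwell_time", "scroll_distance", "mouse_distance"):
        user_mean = out.groupby("user_id")[col].transform("mean")
        out[f"rel_{col}"] = out[col] / user_mean.replace(0, np.nan)
    # Contextualized scrolling: how much of the page was actually scrolled,
    # and whether the user reached the bottom of the page.
    out["scrolled_area"] = out["scroll_distance"] / out["page_height"]
    out["hit_bottom"] = (out["scroll_distance"] + out["window_height"]
                         >= out["page_height"]).astype(int)
    # Relate feedback to page complexity: e.g. dwell time per unit of text.
    out["dwell_per_text"] = out["dwell_time"] / out["text_size"].replace(0, np.nan)
    return out
```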


Page 17: Using the Context of User Feedback in Recommender Systems


Purchase Prediction Task

Learning method: $F_{u,o}, C_{u,o} \to r_{u,o}$
Purchases represent the golden standard (binary)
Can be viewed as both a regression and a classification task

Regression
+ Can predict negative preference
- Hard to learn from a binary objective
Linear regression, Lasso regression, boosted linear regression

Classification
Use the purchased-class probability as the inferred rating
J48 decision tree, boosted decision stump
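A rough sketch of how the two families of learners could be instantiated; the scikit-learn stand-ins are assumptions (J48/C4.5 is approximated by a CART decision tree, the boosted linear regression by AdaBoost over its default tree base learner), and the helper shows how each family yields a predicted rating.

```python
# Sketch: candidate learners for the purchase prediction task.
# The scikit-learn stand-ins are assumptions, not the paper's exact setup.
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, AdaBoostRegressor

regressors = {
    "LinReg": LinearRegression(),
    "Lasso": Lasso(alpha=0.01),
    # Boosts regression trees by default; stand-in for boosted linear regression.
    "Ada Reg": AdaBoostRegressor(n_estimators=50),
}
classifiers = {
    "Tree (~J48)": DecisionTreeClassifier(min_samples_leaf=5),
    "Ada Stump": AdaBoostClassifier(n_estimators=100),  # boosted decision stumps
}

def predicted_rating(model, X, is_classifier):
    """Regression: the predicted value; classification: purchase-class probability."""
    return model.predict_proba(X)[:, 1] if is_classifier else model.predict(X)
```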

Page 18: Using the Context of User Feedback in Recommender Systems


Recommendation Task

For a user $u$ and ratings $r_{u,o}$, return the recommended objects $\hat{O}_u$
Based on the user model, compute a rating for all unknown objects
Recommend the top-k best rated objects

Vector Space Model (VSM)
Content-based algorithm from the information retrieval domain
Recommends objects whose feature vectors are similar to the ones already "rated" by the user (see the sketch below)

Popular SimCat
Hybrid recommending algorithm
Recommends popular objects and objects from already "rated" categories, or ones similar to them

Matrix factorization (collaborative)
Poor performance on small e-commerce sites, not shown
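A minimal sketch of a VSM-style content-based recommender under these assumptions: object content is a plain text of attributes, TF-IDF vectors serve as feature vectors, and the user profile is the rating-weighted average of the vectors of already "rated" objects. This is an illustrative reading of VSM, not the paper's exact implementation.

```python
# Sketch of a VSM content-based recommender; TF-IDF over textual object
# attributes and the variable names below are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_vsm(object_texts, rated_ids, ratings, k=10):
    """object_texts: {object_id: attribute text}; ratings[i] belongs to rated_ids[i]."""
    ids = list(object_texts)
    vectors = TfidfVectorizer().fit_transform([object_texts[i] for i in ids])
    row = {oid: r for r, oid in enumerate(ids)}
    weights = np.asarray(ratings, dtype=float)
    # User profile: rating-weighted average of the rated objects' vectors.
    rated_vecs = vectors[[row[i] for i in rated_ids]].toarray()
    profile = weights @ rated_vecs / max(weights.sum(), 1e-9)
    # Rank the not-yet-visited objects by cosine similarity to the profile.
    scores = cosine_similarity(profile.reshape(1, -1), vectors).ravel()
    unknown = [i for i in ids if i not in set(rated_ids)]
    return sorted(unknown, key=lambda i: scores[row[i]], reverse=True)[:k]
```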


Page 19: Using the Context of User Feedback in Recommender Systems


Evaluation

Real data from Czech travel agency users
Evaluate the contribution of context-based features
Five different feature sets: binary, dwell time, raw feedback, raw + context, all features

Purchase prediction task
LOOCV applied per user
Ranking task: purchased objects should appear at the top of the purchase-probability list
Metrics: nDCG, recall@top-k

Recommendation task
LOOCV applied per (user, purchased object) pair
Ranking task: the purchased object should appear at the top of the recommended-objects list
Metrics: nDCG, recall@top-k
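For reference, a small sketch of the two ranking metrics as they would be computed here, with binary relevance (purchased = relevant); this follows the standard definitions and is not code from the paper.

```python
# Sketch of the ranking metrics used: binary-relevance nDCG and recall@top-k.
# The purchased objects form the set of relevant ("golden standard") items.
import math

def ndcg(ranked_ids, purchased):
    """Binary-relevance nDCG over the full ranked list."""
    dcg = sum(1.0 / math.log2(pos + 2)
              for pos, oid in enumerate(ranked_ids) if oid in purchased)
    ideal = sum(1.0 / math.log2(pos + 2)
                for pos in range(min(len(purchased), len(ranked_ids))))
    return dcg / ideal if ideal > 0 else 0.0

def recall_at_k(ranked_ids, purchased, k):
    """Fraction of purchased objects appearing in the top-k of the ranking."""
    hits = sum(1 for oid in ranked_ids[:k] if oid in purchased)
    return hits / len(purchased) if purchased else 0.0
```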


Page 20: Using the Context of User Feedback in Recommender Systems


Purchase Prediction Results

Using context significantly improves purchase prediction in terms of nDCG
Similar results hold for recall@top-k
Relating feedback to page complexity directly does not seem to have a strong effect

Classification-based methods outperform regression-based ones


Purchase prediction results (nDCG):

Method         | Dwell Time | Raw feedback | Raw + Context | All features
Regression
  LinReg       |   0.725    |    0.714     |     0.834     |    0.828
  Lasso        |   0.730    |    0.719     |     0.831     |    0.827
  Ada LinReg   |   0.713    |    0.713     |     0.863     |    0.864
Classification
  J48          |   0.738    |    0.740     |     0.891     |    0.893
  Ada Tree     |   0.757    |    0.763     |     0.950     |    0.950

Page 21: Using the Context of User Feedback in Recommender Systems


Recommendation Results

Improvements over the binary baseline appear only when context is used
Mostly statistically significant w.r.t. recall@top-5 and recall@top-10
The improvements are not as large as in the purchase prediction task
However, more variables are involved in this evaluation
A large effect of item popularity was observed


Method   | RecSys         | Binary | Dwell Time | Raw feedback | Raw + Context | All features
J48      | VSM            | 0.304  |   0.299    |    0.295     |     0.303     |    0.311
Ada Tree | VSM            | 0.304  |   0.293    |    0.298     |     0.294     |    0.296
J48      | Popular SimCat | 0.362  |   0.353    |    0.354     |     0.373     |    0.372
Ada Tree | Popular SimCat | 0.362  |   0.358    |    0.358     |     0.363     |    0.370

(The Binary feature set does not use the learned rating, so its value is shared by both learners.)

Page 22: Using the Context of User Feedback in Recommender Systems

Conclusions, Future Work

Key outcomes
Implicit feedback can be more than just a binary variable
Observed feedback should be considered with respect to the context of page and device
Doing so can improve the quality of the recommended objects

Future work, open problems
Better models of employing context and better purchase prediction methods
Comparison with "the more the better" rating estimations
Further evaluation scenarios, e.g. recommending at the beginning of a new session
More refined feedback, e.g. feedback on an object's attributes?
On-line deployment and evaluation

Page 23: Using the Context of User Feedback in Recommender Systems


Thank you!

Questions, comments?

Supplementary materials: http://bit.ly/2dsjg6j
Slides: http://www.slideshare.net/LadislavPeska/using-the-context-of-user-feedback-in-recommender-systems