
Hands-on 2 (6/6)



ISWC 2010: TUTORIAL: Ten Ways to Make your Semantic App Addictive


  • 1. Hands-on experiences with incentives and mechanism design
    Roberta Cuel, University of Trento, IT; Markus Rohde, University of Siegen, DE; and Germán Toro del Valle, Telefonica I+D, ES
    ISWC 2010
  • 2. How to design effective incentives/rules
    Analyze the domain
    • What?
    Working environment
    Job descriptions
    Organization (tasks, hierarchy, compensation, social, communication)
    • How?
    Qualitative face-to-face interviews and questionnaires
    Observations with selected individuals
    Quantitative analysis (data collection)
  • 3. How to design effective incentives/rules (2)
    Identify the preferences and motivations that drive users
    Concentrate on every-day uses for those specific users
    Formalize the existing reward system
    Find yourself in the matrix
    Design the simplest possible solution that can effectively support those uses
    Translate into a small number of alternative, testable hypotheses
    Fine-tune the reward system
  • 4. Fine-tuning incentives with mechanism design: a step-by-step procedure
    Mimic the situation in the lab
    Set up the experiment as close to the real-life situation as possible
    Run the experiment with volunteer subjects, randomly allocated to treatments
    Test alternative hypotheses about the effect of incentive schemes on behavior
    Check differences in outcomes
    If happy, go to the next slide; otherwise re-design the hypotheses and run a new trial.
  • 5. Fine-tuning part II
    Start adding realism components:
    Move to real subjects (field test)
    Move to real tasks (with real subjects)
    Move to real subjects handling real tasks
    Move to real situation (field experiment)
    During the process you:
    Lose control over ability to manipulate variables
    Gain awareness of interaction between variables
    Let's look at what we are doing with case studies!
  • 6. Telefonica I+D case study
    Corporate portal
    What is the most obvious incentive from an economic point of view?
    What can we do with a small budget dedicated to incentivizing users?
    How do we know which system is the best for our setting?
  • 7. Basic experiment
    Test two reward/incentive schemes
    Pay per click:
    €0.03 per tag added (up to €3 maximum).
    Winner-takes-all model:
    The person who adds the highest number of tags/annotations wins €20
    What would you choose?
    (Participation fee: €5)
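The two schemes can be sketched as payoff functions. This is a minimal sketch under two assumptions not stated on the slide: the €5 participation fee is paid in both treatments, and the €3 cap bounds only the flexible, per-tag part of the payment.

```python
# Sketch of the two incentive schemes above. Assumptions (not stated on
# the slide): the €5 participation fee is paid in both treatments, and
# the €3 cap applies to the flexible per-tag part of the payment.

PARTICIPATION_FEE = 5.00  # euros, paid for showing up


def pay_per_tag(n_tags: int, rate: float = 0.03, cap: float = 3.00) -> float:
    """Payment grows by €0.03 per tag, capped at €3, plus the show-up fee."""
    return PARTICIPATION_FEE + min(n_tags * rate, cap)


def winner_takes_all(n_tags: int, best_in_group: int, prize: float = 20.00) -> float:
    """Only the participant with the most tags in the group wins the €20 prize."""
    bonus = prize if n_tags == best_in_group else 0.0
    return PARTICIPATION_FEE + bonus


# Best performers in each treatment (from slide 16): 78 and 96 tags.
print(pay_per_tag(78))           # 78 tags of flexible pay plus the fee
print(winner_takes_all(96, 96))  # the group's top tagger takes the prize
print(winner_takes_all(50, 96))  # everyone else gets only the show-up fee
```

Note the different risk profiles this encodes: pay per tag rewards every marginal tag (up to the cap), while winner-takes-all pays nothing extra unless you finish first.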
  • 8. The experiment (setting)
    36 students
    Random assignment to the two treatments
    Individual task: annotation of images
    Clear set of instructions
    Training (guided) session to give a basic understanding of the annotation tool
    8-minute timed session (time pressure)
    Goal: produce the maximum number of tags in the allotted time on a random set of images
  • 9. The lab
  • 10. The experiment: screenshots
  • 16. Number of tags
    Pay per tag (N=19)
    Total number of tags: 901
    Max n. of tags: 78
    N. tags (avg.) = 47.42
    Payment (avg. per person) = €6.66
    Payment (avg. per tag) = €0.1404
    Total = €126.50 (€31.50 flexible compensation)
    Winner-takes-all model (N=17)
    Total number of tags: 1067
    Max n. of tags: 96
    N. tags (avg.) = 62.76 (32% increase!)
    Payment (avg. per person) = €6.18
    Payment (avg. per tag) = €0.0984
    Total = €105 (€20 flexible compensation)
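The per-person and per-tag averages follow directly from the reported totals. A quick check — the pay-per-tag group size is not stated on the slide, so N = 19 is assumed (36 students minus the 17 in the winner-takes-all treatment):

```python
# Reproducing the slide's summary figures from the reported totals.
# Assumption: the pay-per-tag group has N = 19 (36 students, 17 of
# whom were assigned to the winner-takes-all treatment).

groups = {
    "pay per tag":      {"n": 19, "tags": 901,  "total_pay_eur": 126.5},
    "winner takes all": {"n": 17, "tags": 1067, "total_pay_eur": 105.0},
}

for name, g in groups.items():
    tags_per_person = g["tags"] / g["n"]          # avg. tags produced
    pay_per_person = g["total_pay_eur"] / g["n"]  # avg. cost per subject
    pay_per_tag = g["total_pay_eur"] / g["tags"]  # avg. cost per tag
    print(f"{name}: {tags_per_person:.2f} tags/person, "
          f"€{pay_per_person:.2f}/person, €{pay_per_tag:.4f}/tag")
```

This reproduces the slide's figures: 47.42 vs. 62.76 tags per person, at a lower cost per tag for the winner-takes-all treatment (€0.0984 vs. €0.1404).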
  • 17. The results
    The t-test and F-test are both significant
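A sketch of how such a comparison is run. The per-subject tag counts are not published on these slides, so the two samples below are synthetic stand-ins (generated to roughly match the reported group means) used only to illustrate the mechanics; Welch's variant is used here since the two groups differ in size.

```python
# Illustrative only: the real per-subject counts are not in the slides,
# so these samples are synthetic, generated to resemble the reported
# group means (47.42 and 62.76 tags per person).
import random
import statistics

random.seed(0)
pay_per_tag = [max(0, round(random.gauss(47.4, 15))) for _ in range(19)]
winner_all = [max(0, round(random.gauss(62.8, 15))) for _ in range(17)]


def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_b - mean_a) / (var_a / len(a) + var_b / len(b)) ** 0.5


print(f"t = {welch_t(pay_per_tag, winner_all):.2f}")
```

A large |t| (against the appropriate degrees of freedom) is what "significant" means on this slide: the difference in mean tag counts is unlikely to be due to the random assignment alone.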
  • 18. Tag distribution: interface matters!
    Pay per tag
    Most frequent tags: nature (24 times), snow (22), green (20)
    134 tags repeated only 2 times
    437 unique tags
    Winner-takes-all
    Most frequent tags: green (18 times), snow (14), butterfly (13)
    118 tags repeated only 2 times
    390 unique tags
  • 19. Some biases
    Students are:
    Volunteers who are used to participating in experiments
    Heavy web users and game players
    Paid to show up
    Quality of the tags
    Tagging quality has been controlled for: no obvious mistakes or cheating
  • 20. Summary of results & next lab steps
    Basic hypothesis confirmed
    More work needed:
    Effort was directed at producing a good (tags) that users do not consume themselves (tags are used to achieve other goals): change the structure of the game to let users exploit tagging to achieve results (treasure hunt!)
    Re-run the experiment with the new structure: users now produce tags both to earn money and to use the tags to perform more tasks.
  • 21. Next steps: Telefonica I+D
    Replicate experiment with real users
    Main change 1: task becomes relevant in terms of practical usefulness for users
    Main change 2: task has social implications
    Main change 3: expectations change dramatically (workers vs. students: €5 to participate???)
    Add realism
    Mimic social structure in the company:
    Run experiment with teammates
    Use real tasks
    Try alternative pay for performance schemes