Artwork Recommenders

A recent email on the user-modeling mailing list pointed to the Artwork Recommender, which you can find here. After answering a short questionnaire, you can rate artwork from 1 to 5 stars, corresponding to “I hate this artwork” through “I like this artwork very much.” Being no artwork expert, I found my ratings to be quite biased towards the high end (3-4 stars) – it’s much more difficult to rate artwork than it is to rate movies or songs! There is even the option of saying that you are not interested in a piece of artwork at all, but I never felt like clicking it. Maybe this highlights the difficulty of finding good recommendations: understanding a rating process that users themselves don’t understand. You can give an overall rating for each item and also rate it in terms of its attributes (artist/material/style).
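For concreteness, here is a rough sketch of how a rating like this could be represented – an overall star value, the “not interested” option, and the per-attribute ratings. The class and field names (and the example IDs) are my own invention, not the site’s actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ArtworkRating:
    """One user's feedback on one artwork (illustrative schema, not the site's)."""
    user_id: str
    artwork_id: str
    overall: Optional[int] = None   # 1-5 stars; None if no overall rating was given
    not_interested: bool = False    # the "not interested in this artwork at all" option
    # per-attribute ratings, e.g. {"artist": 4, "material": 3, "style": 5}
    attributes: Dict[str, int] = field(default_factory=dict)

# A rating biased towards the high end, like most of mine were
r = ArtworkRating("me", "painting_42", overall=4, attributes={"style": 4, "artist": 3})
```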

The website, though, has a neat interface (watch the paintings you rate “fly” into your profile history), and they are looking for participants to add to their dataset, so if you have 5 minutes, go in and rate 20 or so paintings, help them out, and see what comes out…

4 Responses to “Artwork Recommenders”

  1. “Resnick and Zeckhauser (2000b) have pointed out the so-called Pollyanna effect in their study of the eBay reputation reporting system. This effect refers to the disproportionately positive feedback from users and rare negative feedback.”

    Why? Probably, as you said, the key is to understand what happens WHEN users rate:

    . How good are users at rating stuff?

    . What motivates users to rate? The paper in the previous post tells us that the way in which users can express ratings (e.g., one-click or full-text reviews) plays a role. Plus, in his paper at iTrust this year, Mark Kramer mentions the self-selection problem: “In NetFlix, the average rating is 3.6/5.0. At Amazon, the average book rating is even higher, 3.9/5.0.” Mark says that this might be explained by self-selection (“Self-selection bias is a classic experimental problem, defined as a false result introduced by having the subjects of an experiment decide for themselves whether or not they will participate.”). And self-selection surely plays a role since, in most reputation systems, users decide whether to rate or not.

  2. Neal Lathia says:

    I suppose the difficult thing is that not rating an item may then imply two separate things: not knowing anything about the item (as we currently assume) or not being interested in rating it (which is information, not a lack of it!)

  3. In most systems, if you are supposed to rate an item, you should already know it – e.g., after watching a movie, you are asked to rate it. Right?

    It never occurred to me, but ‘not being interested in rating’ is a piece of info indeed. If users are repeatedly “not interested in rating” a movie after watching it, should that affect the final rating of the movie? Currently, only stated ratings on the movie contribute to its final rating.

    Finally, if what is actually going on is self-selection, it would be interesting to see how self-selection is avoided in social experiments and whether those solutions could somehow be applied to collaborative filtering… (a toy simulation of the effect is sketched after the comments below).

  4. Judging from their abstracts, these two papers might help to better understand self-selection:

    Assessing and Compensating for Self-Selection Bias (Non-Representativeness) of the Family Research Sample
    http://links.jstor.org/sici?sici=0022-2445(199211)54%3A4%3C925%3AAACFSB%3E2.0.CO%3B2-5

    Compensating Differentials and Self-Selection: An Application to Lawyers
    http://links.jstor.org/sici?sici=0022-3808%28198804%2996%3A2%3C411%3ACDASAA%3E2.0.CO%3B2-T&size=LARGE&origin=JSTOR-enlargePage
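To make the self-selection point in the comments above concrete, here is a toy simulation – my own sketch in Python, not taken from either of the papers above. Every user forms an opinion from 1 to 5 stars, but only the users who like an item enough bother to submit a rating, so the observed average ends up well above the true one, much like the inflated NetFlix and Amazon averages quoted earlier.

```python
import random

random.seed(0)  # reproducible toy example

def simulate(n_users: int = 100_000, rate_threshold: int = 3):
    """Return (true average opinion, average of the ratings actually submitted)."""
    # Every user forms a "true" opinion of the item, uniformly from 1 to 5 stars.
    opinions = [random.randint(1, 5) for _ in range(n_users)]
    # Self-selection: only users whose opinion reaches the threshold bother to rate.
    submitted = [o for o in opinions if o >= rate_threshold]
    return sum(opinions) / len(opinions), sum(submitted) / len(submitted)

true_avg, observed_avg = simulate()
print(f"average opinion of all users:       {true_avg:.2f}")    # about 3.0
print(f"average of submitted ratings only:  {observed_avg:.2f}") # about 4.0
```

The hard threshold is of course a caricature; a softer model where the probability of rating grows with how much the user liked the item shows the same inflation, just less sharply.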