Filtering By Trust

Many of the posts on this blog discuss the idea of computers using trust in different scenarios. Even after starting as a research student and reading a lot of good papers about trust, I still find the idea slightly strange: is trust something that can actually be quantified? Are these computers really reasoning about trust? (What is trust, anyway?)

Trust, however, seems to have a place in our research. It gives us a metaphor to describe how interactions between “strangers” should happen, a platform on which to build applications that feed off these ideas, and a language to describe and analyse the emergent properties we observe in these systems. So even if a trust relationship cannot be boiled down to a deterministic protocol, it gives researchers a model of the world they are exploring.

I decided to apply this metaphor to collaborative filtering, an algorithm that generates recommendations by assigning neighbours to each user. Usually this assignment is done by computing similarity, which has its downfalls: how similar are two people who have never rated any items in common? What is the best way of measuring similarity? Applying the metaphor of trust, instead, aims at capturing the value that each neighbour adds to a user’s recommendations over time, where value is derived not only from agreement but also from a neighbour’s ability to provide information at all. While similarity-based algorithms use neighbour ratings directly, interactions based on trust acknowledge that information received from others is subject to interpretation, as there may be a strong semantic distance between the way two users apply ratings to items.
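To make the baseline concrete, here is a minimal sketch of a similarity-based predictor. It is illustrative only, not the paper’s algorithm; the function names and weighting scheme are my own. Notice how it exposes the two pitfalls above: two users with no co-rated items get a similarity of zero, and a neighbour who never rated the target item contributes nothing.

```python
# Minimal sketch of the similarity-based baseline (illustrative, not
# the paper's algorithm): Pearson correlation over co-rated items,
# then a similarity-weighted prediction.

from math import sqrt

def pearson(a, b):
    """Similarity between two users' rating dicts {item: rating}."""
    common = set(a) & set(b)
    if not common:
        return 0.0  # "strangers": no co-rated items, similarity undefined
    mean_a = sum(a[i] for i in common) / len(common)
    mean_b = sum(b[i] for i in common) / len(common)
    num = sum((a[i] - mean_a) * (b[i] - mean_b) for i in common)
    den = (sqrt(sum((a[i] - mean_a) ** 2 for i in common))
           * sqrt(sum((b[i] - mean_b) ** 2 for i in common)))
    return num / den if den else 0.0

def predict(user, neighbours, item):
    """Similarity-weighted average of the neighbours' ratings of item."""
    weighted, total = 0.0, 0.0
    for n in neighbours:
        if item in n:  # neighbours who never rated the item are silent
            w = pearson(user, n)
            weighted += w * n[item]
            total += abs(w)
    return weighted / total if total else None  # no coverage at all
```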

Trust is a richer concept than similarity: it favours neighbours who can give you information about the items you seek, and it offers a means of learning how to interpret the information received from them. Evaluated empirically, this technique improves on the basic similarity-driven algorithms in terms of both accuracy and coverage: modelling the problem as an instance of a trust-management scenario seems to offer an escape from the traditional pitfalls of similarity-based approaches. I leave all the details to the paper, “Trust-based Collaborative Filtering,” which is due to appear in IFIPTM 2008 (thanks to Daniele for his great comments and feedback).
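As a toy illustration of the idea (this is not the model in the paper; the update rule, constants, and names are invented), one could let each neighbour’s trust value drift according to how useful their past contributions turned out to be, rewarding both agreement and the sheer ability to supply a rating at all:

```python
# Hypothetical sketch only: one way a per-neighbour trust value could
# accrue over time. NOT the paper's model; everything here is invented
# for illustration.

def update_trust(trust, neighbour_rating, actual_rating,
                 rating_range=4.0, learning_rate=0.1):
    """Move trust toward 1 when the neighbour's rating was close to the
    user's eventual rating, and toward 0 when it was far off. A
    neighbour who could not provide a rating leaves trust unchanged."""
    if neighbour_rating is None:
        return trust  # no information provided this time
    error = abs(neighbour_rating - actual_rating) / rating_range
    target = 1.0 - error  # 1.0 means perfect agreement
    return trust + learning_rate * (target - trust)
```

The point of the sketch is that trust is earned interaction by interaction, rather than computed once from rating overlap.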

3 Responses to “Filtering By Trust”

  1. Two main interpretations are to view trust as the perceived reliability of something or somebody, called “reliability trust”, and to view trust as a decision to enter into a situation of dependence, called “decision trust”. Both reliability trust and decision trust reflect a positive belief about something on which the trustor depends for his welfare. Reliability trust is most naturally measured as a discrete or continuous degree of reliability, whereas decision trust is most naturally measured in terms of a binary decision. While most trust and reputation models assume reliability trust, decision trust can also be modelled. Systems based on decision trust models should be considered as decision-making tools.

    The difficulty of capturing the notion of trust in formal models in a meaningful way has led some economists to reject it as a computational concept. The strongest expression of this view has been given by Williamson (Calculativeness, Trust and Economic Organization. Journal of Law and Economics, 36:453-486, April 1993), who argues that the notion of trust should be avoided when modelling economic interactions, because it adds nothing new, and that well-studied notions such as reliability, utility and risk are adequate and sufficient for that purpose. Personal trust is the only type of trust that can be meaningful for describing interactions, according to Williamson. He argues that personal trust applies to emotional and personal interactions such as love relationships, where mutual performance is not always monitored and where failures are forgiven rather than sanctioned. In that sense, traditional computational models would be inadequate, e.g. because of insufficient data and inadequate sanctioning, but also because it would be detrimental to the relationships if the involved parties were to take a computational approach. Non-computational models of trust can be meaningful for studying such relationships according to Williamson, but developing such models should be done within the domains of sociology and psychology, rather than economics.

    In the light of Williamson’s view on modelling trust it becomes important to judge the purpose and merit of online trust management itself. Can trust management add anything new and valuable to Internet technology and the online economy? The answer, in our opinion, is definitely yes. The value of trust management lies in the architectures and mechanisms for collecting trust-relevant information, for efficient, reliable and secure processing, for distribution of derived trust and reputation scores, and for taking this information into account when navigating the Internet and making decisions about online activities and transactions. Economic models for risk taking and decision making are abstract and do not address how to build trust networks and reputation systems. Trust management specifically addresses how to build such systems, and can in addition include aspects of economic modelling whenever relevant and useful.

    It can be noted that the traditional cues of trust and reputation that we are used to observing and depending on in the physical world are missing in online environments. Electronic substitutes are therefore needed when designing online trust and reputation systems. Furthermore, communicating and sharing information related to trust and reputation is relatively difficult, and normally constrained to local communities, in the physical world, whereas IT systems combined with the Internet can be leveraged to design extremely efficient systems for exchanging and collecting such information on a global scale.

  2. [...] are a number of names appearing that merge recommendations with social interactions – away from neat algorithms and towards human-driven reviews and recommendations. Names like Reevoo, Boxedup, LouderVoice, [...]

  3. [...] user’s mean from each rating, do an average- weighting each rating by similarity (or trust!)- and then add your own mean to that. So neighbours who have not rated the item in question do not [...]
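The prediction scheme this last comment describes (subtract each neighbour’s mean from their rating, take a weighted average, then add back the target user’s own mean) is the classic Resnick formula, where the weight can be similarity or, indeed, trust. A minimal sketch, with invented names:

```python
# Resnick-style mean-centred prediction, as described in comment 3.
# Illustrative code; the data layout is an assumption, not the paper's.

def predict(user_mean, neighbours, item):
    """user_mean plus the weighted average of the neighbours'
    mean-centred ratings of item. Each neighbour is a tuple
    (ratings_dict, mean_rating, weight), with weight being
    similarity or trust."""
    num, den = 0.0, 0.0
    for ratings, mean, weight in neighbours:
        if item in ratings:  # unrated items contribute nothing
            num += weight * (ratings[item] - mean)
            den += abs(weight)
    if den == 0:
        return user_mean  # no neighbour rated the item
    return user_mean + num / den
```

Mean-centring is what handles the semantic distance mentioned in the post: a neighbour’s “4” is read relative to how that neighbour rates on average, not as a raw score.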