Archive for the ‘conference’ Category

Get Ready to Rummble!

Friday, June 20th, 2008

The very last session of the IFIPTM 2008 conference was a demo session; three demos were run, and the one I liked most was Rummble, a web site that, much like other web2.0 ideas, is built on a social network. The interesting addition, though (and what makes it so appropriate for a conference on trust), is that when you add a friend you can say how much you trust their opinions. You then go on to “rummble” different locations (shops/restaurants/clubs) by rating, tagging, and describing them with a comment. The neat thing is that, by combining trust, rating similarity, and social distance, the site can predict how much you will like places you have not yet rummbled, and colours them accordingly. The site is also fully mobile! (more…)


SECOVAL 2008

Friday, June 20th, 2008

Fourth International Workshop on the Value of Security through Collaboration (SECOVAL 2008)
part of SECURECOMM’08 in cooperation with ACM and CREATE-NET
September 22nd, Istanbul, Turkey
Submission Deadline: July 10, 2008

Intl Workshop on Trust in Mobile Environments

Wednesday, June 18th, 2008

Yesterday was the first International Workshop on Trust in Mobile Environments (TIME 2008), co-located with the IFIPTM 08 conference in Trondheim, Norway. The workshop merged with the Workshop on Sustaining Privacy in Autonomous Collaborative Environments (SPACE), and consisted of three sessions. Here is a brief summary of what we saw: (more…)

ACM SAC 2009

Thursday, June 5th, 2008


The 24th ACM Symposium on Applied Computing
at the Hilton Hawaiian Village Beach Resort & Spa
Waikiki Beach, Honolulu, Hawaii, USA

Aug. 16, 2008: Full paper submission
Oct. 11, 2008: Author notification
Oct. 25, 2008: Camera-ready copy
Mar. 8-12, 2009: ACM SAC in Hawaii, USA


Conference on Reputation

Wednesday, June 4th, 2008

First International Conference on Reputation

What: Work on Reputation from a multidisciplinary standpoint

Where: Tuscany, Italy

When: March 2009

KDD 2008 Workshops

Tuesday, April 15th, 2008

The 2008 ACM SIGKDD Conference (Knowledge Discovery and Data Mining) has just announced its list of accepted workshops, which appears below. More information can be found here.

Full Day Workshops

  • The 2nd International Workshop on Data Mining and Audience Intelligence for Advertising (ADKDD’08)
  • The 9th Intl. Workshop on Multimedia Data Mining
  • WEBKDD’08: 10 Years of Knowledge Discovery on the Web
  • The 2nd International Workshop on Knowledge Discovery from Sensor Data (Sensor-KDD, 2008)
  • The 2nd ACM SIGKDD International Workshop on Privacy, Security, and Trust in KDD (PinKDD’08)
  • The 2nd SNA-KDD Workshop on Social Network Mining and Analysis

Half-day Workshops

  • The 2nd International Workshop on Mining Multiple Information Sources
  • The 2nd KDD Workshop on Large Scale Recommender Systems and the Netflix Prize
  • Data Mining with Constraints
  • Data Mining using Matrices and Tensors
  • The 8th International Workshop on Data Mining in Bioinformatics (BIOKDD08)
  • Data Mining for Business Applications

RecSys 2008

Monday, March 31st, 2008

The CFP for RecSys 2008 is out: Paper submission is May 18th, 2008.

The doctoral symposium ad has also been circulated:

The Recommender Systems 2008 Doctoral Symposium provides an opportunity for doctoral students to explore and develop their research interests in an interdisciplinary workshop, under the guidance of a panel of distinguished research faculty. We invite students who feel they would benefit from this kind of feedback on their dissertation work to apply for this unique opportunity to share their work with students in a similar situation as well as senior researchers in the field. The strongest candidates will be those who have an idea and an area, and have made some progress, but who are not so far along that they can no longer make changes. Typically, this means they will have made their dissertation proposal, but still be about a year from completion.


Friday, March 21st, 2008

I’m glad to say that the TRECK track of SAC went quite well and did not suffer from some of the things I mentioned in my previous rant. The track was organized by Dr. Jean-Marc Seigneur of the University of Geneva; the two sessions were chaired by Dr. Virgilio Almeida of the Federal University of Minas Gerais (with whom I had an interesting discussion after the track), and were broadly divided into two themes: trust and recommender systems. The trust session had an overall focus on peer-to-peer systems; here are some quick samples:

  • Francesco Santini presented the idea of multitrust, which aims at computing trust in a dynamically created group of trustees who all have different subjective trust values ["Propagating Multitrust Within Trust Networks, " Bistarelli/Santini].
  • Asmaa Adnane presented the application of trust to detecting misbehaviour in link-state routing algorithms. I always wonder how well these cool ideas will work in practice: if information is merely lost or delayed, will a node be deemed untrustworthy? ["Autonomic Trust Reasoning Enables Misbehavior Detection in OLSR," Adnane/Timoteo de Sousa/Bidan/Me']
  • The Surework Framework extended the current operation of trust in p2p networks to include the idea of super-peers; nodes with very high reputation can, in fact, become reputation servers. ["Surework: A Super-peer Reputation Framework for p2p Networks," Rodriguez-Perez/Esparza/Munoz]
  • The CAT Model was introduced and explained: it is a model for open and dynamic systems that considers services as contexts. The 15-minute time limit was a bit constraining, and I’ll have to read the full paper! ["CAT: A Context-Aware Trust Model for Open and Dynamic Systems" Uddin/Zulkernine/Ahamed]
  • Rowan Martin-Hughes applied a game-theoretic analysis to understand why people would defect in a large-scale open system, like eBay. The analysis was based on a modified version of the Prisoner’s dilemma, which was very interesting; the only question, as Daniele mentioned, is whether this is appropriate when users may very well behave irrationally. ["Examining the Motivations of Defection in Large-Scale Open Systems," Martin-Hughes/Renz]

The second session focused on recommender systems:

  • Karen Tso-Sutter presented her work on combining user-item tags into the collaborative filtering process. Interestingly, tags did not improve accuracy until the algorithm was already boosted by using both user- and item- based algorithms. ["Tag-Aware Recommender Systems by Fusion of Collaborative Filtering Algorithms," Tso-Sutter/Marinho/Schmidt-Thieme]
  • My work! Looking at the similarity distribution over a graph generated by a nearest-neighbour algorithm. ["The Effect of Correlation Coefficients on Communities of Recommenders," Lathia/Hailes/Capra].
  • Patricia Victor’s paper discussed an extension to Paolo Massa’s work on trust-aware recommender systems, which concluded that the cold-start problem in recommender systems can be avoided by having users express trust values in other users, which can then be propagated. The problem is: which users should they connect to? The paper has an interesting analysis of the different kinds of users in the Epinions dataset. ["Whom Should I Trust? The Impact of Key Figures on Cold-Start Recommendations," Victor/Cornelis/Teredesai/De Cock].
  • The last paper veered away from collaborative filtering to look at the role of keywords and taxonomies in content-based recommender systems. The taxonomy vs. folksonomy war continues! ["Comparing Keywords and Taxonomies in the Representation of Users Profiles in a Content-Based Recommender System" Loh/Lorenzi/Simoes/Wives/Oliveira]

The full list of abstracts can be read on the trustcomp-treck web site. If any of the attendees or authors are reading this post: we welcome your thoughts and comments, and officially invite you to contribute to this blog! To write a guest-post about your research, please get in touch! (n.lathia @

No-Shows and PowerPoint Karaoke @ ACM SAC 2008

Wednesday, March 19th, 2008

I’m currently at ACM SAC 2008, in beautiful Fortaleza (Brazil). The conference has a number of tracks; many of them are very interesting and they cover a very broad range of topics. I’ve jumped around through a few different sessions, including CISIA (Computational Logic and Computational Intelligence in Signal and Image Analysis), WT (Web Technologies), IAR (Information Access & Retrieval), SWA (The Semantic Web and Applications), and MMV (Multimedia and Visualisation). The TRECK track, where I am presenting, is on the last day of the conference.

Looking at the titles of the papers to be presented, each session promised to be very exciting; even though I am no expert in the field, I wanted to see what I could get out of it. Unfortunately, in a lot of the sessions I attended as many as 3/5 of the presenters did not show up, leaving the session chairs in a very embarrassing situation. In one case, a session ended early and so I went to the next room; I got there just in time to hear the chair ending that session as well due to no-shows: very disappointing. I’ve also seen presenters leave a session once their bit is done, without contributing to discussions that they are supposed to be experts in, leaving only the session chairs to ask questions.

To make matters worse, some of the presentations I have seen have been quite poor. They all follow the same formula: title, outline, a bunch of slides where the presenter reads out what is on the slide, a conclusion where they repeat what they just presented, and future work: PowerPoint karaoke at its best. I understand that some may follow this formula due to difficulties with English, but they do not even sound interested in their own work. They also immediately dive into the details, or cite other papers without explaining why, forgetting that some of us (like me) may be there to learn something new and need the broad strokes of the picture first. (I’m no expert at presenting; however, there are lots of resources on presenting on Daniele’s page here.) It makes attendance at the sessions even more difficult!

Overall, the conference has been organized well, and is running smoothly. The track chairs obviously put a lot of work into this, and it should be an ideal opportunity to mingle with a range of researchers… but the participants are not doing a very good job! I’ll write a separate post about the interesting presentations I have seen.

Turn Ideas into Money

Monday, March 17th, 2008

As the GroupLens research blog is reporting, MyStrands have announced a $100,000 investment for the winner of the recommender startup competition. The winners will be announced at RecSys 2008. On a side note, any UCL-ers interested in entrepreneurship might also be interested in this course run by the UCL graduate school.

Update: it seems that tapping into any wisdom hidden in the masses is the new source of ideas (crowdsourcing): don’t come up with ideas, just build a means for the ideas to come to you. A new competition adds its name to the Netflix prize, this previous post on evaluating algorithms with the masses, and the competition above: Semantihacker is offering $1 million to anyone who can put their semantic-analysis engine to good use!

Filtering By Trust

Thursday, February 28th, 2008

A lot of the posts on this blog discuss the idea of computers using trust in different scenarios. Since starting as a research student, and reading a lot of good papers about trust, I still find the idea slightly strange: is trust something that can actually be quantified? Are these computers actually reasoning about trust? (what is trust?)

Trust, however, seems to have a place in our research. It gives us a metaphor to describe how interactions between “strangers” should happen, a platform to build applications that feed off these ideas, and a language to describe and analyse the emergent properties that we observe in these systems. So, even if a trust relationship may be a concept that cannot be boiled down to a deterministic protocol, it gives researchers a model of the world they are exploring.

I decided to apply this metaphor to collaborative filtering, an algorithm that generates recommendations by assigning neighbours to each user. Usually this assignment is done by computed similarity, which has its downfalls: how similar are two people who have never rated items in common? What is the best way of measuring similarity? Applying the metaphor of trust, instead, aims at capturing the value that each neighbour adds to a user’s recommendations over time, where value is derived not only from agreement, but also from a neighbour’s ability to provide information. While similarity-based algorithms use neighbour ratings as-is, interactions based on trust acknowledge that information received from others is subject to interpretation, as there may be a strong semantic distance between the ways two users apply ratings to items.

Trust is a richer concept than similarity: it favours neighbours who can give you information about the items you seek, and it offers a means of learning to interpret the information received from them. Evaluation shows that this technique improves on the basic similarity-driven algorithms, in terms of both accuracy and coverage: modelling the problem as an instance of a trust-management scenario seems to offer an escape from the traditional pitfalls of similarity-based approaches. I leave all the details to the paper, "Trust-based Collaborative Filtering," which is due to appear in IFIPTM 2008 (thanks to Daniele for his great comments and feedback).
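To make the metaphor concrete, here is a toy sketch of trust-weighted prediction. The data layout, update rule, and parameter values below are my own inventions for illustration, not the algorithm from the paper: each neighbour contributes a mean-centred rating weighted by a trust score, and trust grows with the usefulness of past contributions.

```python
# Toy sketch of trust-based collaborative filtering (illustration only;
# the update rule and data layout are invented, not taken from the paper).

def predict(user_mean, neighbours, item):
    """Trust-weighted prediction from neighbours' mean-centred ratings."""
    num = den = 0.0
    for n in neighbours:
        if item in n["ratings"]:
            num += n["trust"] * (n["ratings"][item] - n["mean"])
            den += n["trust"]
    # No trusted neighbour rated the item: fall back to the user's mean.
    return user_mean if den == 0 else user_mean + num / den

def update_trust(neighbour, item, actual, user_mean, scale=4.0, rate=0.1):
    """Reward a neighbour whose mean-centred rating matched the outcome;
    a neighbour who supplied no rating earns nothing (value = information)."""
    if item not in neighbour["ratings"]:
        return
    error = abs((neighbour["ratings"][item] - neighbour["mean"])
                - (actual - user_mean))
    neighbour["trust"] = max(0.0, neighbour["trust"] + rate * (1.0 - error / scale))
```

Note how, unlike a static similarity weight, the trust score here rewards both accuracy and the sheer ability to supply a rating at all.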

Workshop on Trust in Mobile Environments

Friday, February 15th, 2008

Following Daniele’s previous post on workshops at iTrust, another workshop is doing its own round of advertisement: the iTrust Workshop on Trust in Mobile Environments. Abstracts are due the 28th of March. Here is a short description:

Trust is a vital issue in mobile computing if applications are to support interactions which will carry data of any significance. Consider, for instance, exploring a market place: which vendors should one prefer, and why; how can a user establish the provenance of an item, etc. Various trust models have been developed in recent years to enable the construction of trust-aware applications. However, it is still not clear how robust these models are, and against what types of attacks; how accurate they are in capturing human characteristics and dynamics of trust; how suitable they are to the mobile setting. Mobility brings in orthogonal complexities to the problem of trust management: for example, the transient relationships with the environment and other users call for an investigation of the dependency between trust and context; the lack of a clear shared control authority makes it difficult to verify identities, and to follow up problems later; the limited network capability and ad-hoc connectivity require the investigation of novel protocols for content sharing and dissemination, and so on.

Two great workshops at iTrust

Friday, February 8th, 2008

1) Security and Trust Management (STM).

Papers by April 2nd.
The intersection of security and the real world has prompted research in trust management. This research should ideally translate into proposals of solutions to traditional security issues. But, more often than not, it’s all proposals and few solutions. That is why STM focuses on how trust management may practically solve security issues and, in so doing, how it may enable new applications (eg, reputation, recommendation, collaboration in P2P or mobile nets). The call covers a wide range of topics.

2) Combining Context with Trust, Security, and Privacy (CAT). Paper abstracts by March 28th.
A research field might claim to have entered mainstream status only after it has been accepted by established conferences. Context-awareness and trust management have had that honour, but they have had it separately. We know by now how to design context-aware systems and trust management systems, but how to integrate the two is still the province of unexplored territory. That is why CAT will feature intrepid researchers who will stop us from:

  • sitting down in utter apathy towards the issue of trust being context-dependent – if (context=category of trust), as “rock music” is in “I trust you for recommending rock music”.
  • passing over exciting percom applications – if (context=space of interaction) as “my company premises” is in “my PDA is trusted for accessing confidential documents only within my company premises”.

Last year, CAT was terrific – I still remember the informative talks by Maddy, Tyrone and Linda. This year, it is likely to be even better. That is because CAT is like Math – one does context plus trust, and then multiplies by many researchers to equal stimulating discussion ;-)

Workshop on Online Social Networks

Tuesday, December 18th, 2007

There was a workshop on the impact of social networks on telecommunication networks. Here is the (interesting) .

ICDM 2007

Wednesday, December 5th, 2007

I attended ICDM (a data mining conference) this year. Since I cannot comment on all the papers I’ve found interesting, here is the full program and my comments on very few papers follow ;-)

6 Full papers

1) Improving Text Classification by Using Encyclopedia Knowledge

Existing methods for classifying text do not work well. That is partly because there are many terms that are (semantically) related but do not co-occur in the same documents. To capture the relationships among those terms, one should use a thesaurus. Pu Wang et al. built a huge thesaurus from Wikipedia and showed that classification benefits from its use.
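As a crude illustration of why this helps (the paper builds its thesaurus automatically from Wikipedia; the tiny hand-made map below is purely hypothetical), expanding each document's features with related terms lets two documents match even when their surface vocabularies are disjoint:

```python
# Hand-made stand-in for a Wikipedia-derived thesaurus (illustration only).
RELATED = {
    "football": {"soccer", "sport"},
    "soccer": {"football", "sport"},
    "pitch": {"sport"},
}

def expand(tokens):
    """Augment a bag of words with thesaurus-related terms."""
    features = set(tokens)
    for t in tokens:
        features |= RELATED.get(t, set())
    return features

# "football pitch" and "soccer game" share no raw terms, but their expanded
# feature sets overlap, so a classifier can now relate the two documents.
doc_a = expand(["football", "pitch"])
doc_b = expand(["soccer", "game"])
```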

2) Scalable Collaborative Filtering with Jointly Derived Neighborhood Interpolation Weights
The most common form of collaborative filtering consists of three major steps:
(1) data normalization, (2) neighbour selection, and (3) determination of interpolation weights. Bell and Koren showed that different ways of carrying out the 2nd step do not impact the predictive accuracy, and so revisited the other two steps:
+ the 1st step by removing 10 “global effects” that cause substantial data variability and mask fundamental relationships between ratings,
+ and the 3rd step by computing interpolation weights as a global solution to an optimization problem.
By using these revisions, they considerably improved predictive accuracy, so much so that they won the Netflix Progress Prize.
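A simplified rendition of step (3) can be sketched as follows (this is my own toy formulation; the paper's version also deals with missing ratings among neighbours): treat the weights as the solution of a single regularised least-squares problem over the items the target user has already rated, rather than as independent pairwise similarities.

```python
import numpy as np

# R[k, j]: rating of neighbour k on the j-th item the target user also rated;
# r[j]: the target user's own rating of that item.

def interpolation_weights(R, r, reg=0.1):
    """Jointly derived weights: minimise ||R.T @ w - r||^2 + reg * ||w||^2."""
    K = R.shape[0]
    gram = R @ R.T + reg * np.eye(K)   # K x K regularised Gram matrix
    return np.linalg.solve(gram, R @ r)

def predict(w, neighbour_ratings_of_target):
    """Prediction for the target item: weighted sum over the neighbours."""
    return float(w @ neighbour_ratings_of_target)
```

The point is that the weights are derived jointly, so redundant neighbours share credit instead of each being counted at full strength.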

3) Lightweight Distributed Trust Propagation
Soon individuals will be able to share digital content (eg, photos, videos) using their portable devices in a fully distributed way (without relying on any server). We presented a way for portable devices to select, in a distributed fashion, content from reputable sources (as opposed to previous work, which focuses on centralized solutions).

4) Analyzing and Detecting Review Spam
Here Jindal and Liu proposed an effective way of detecting spam in product reviews.

5) Co-Ranking Authors and Documents in a Heterogeneous Network
Existing ways of ranking network nodes (eg, PageRank) work on homogeneous networks (networks whose nodes represent the same kind of entity, eg, nodes of a citation network usually represent publications). But most networks are heterogeneous (eg, a citation network may well have nodes that are either publications or authors). To rank nodes of heterogeneous networks, Zhou et al. proposed a way that couples random walks. In a citation network, this translates into two random walks that separately rank authors and publications (rankings of publications and their authors depend on each other in a mutually reinforcing way).
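The mutual reinforcement can be boiled down to a toy power iteration (a drastic simplification of the paper's coupled random walks, which also walk within the citation and co-authorship graphs separately; the damping factor d is my own addition, borrowed from PageRank, to keep the scores from degenerating):

```python
import numpy as np

# A[i, j] = 1 if author i wrote paper j (toy authorship matrix).

def co_rank(A, iters=100, d=0.85):
    """Author and paper scores that reinforce each other through A."""
    n_a, n_p = A.shape
    authors = np.ones(n_a) / n_a
    papers = np.ones(n_p) / n_p
    for _ in range(iters):
        papers = (1 - d) / n_p + d * (A.T @ authors)   # papers inherit from authors
        papers /= papers.sum()
        authors = (1 - d) / n_a + d * (A @ papers)     # authors inherit from papers
        authors /= authors.sum()
    return authors, papers
```

An author of several well-scored papers ends up ranked above an author of one, and vice versa for papers by well-ranked authors.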

6) Temporal analysis of semantic graphs using ASALSAN
Say we have a large dataset of emails that the employees of a company (eg, Enron) have exchanged. To make sense of that dataset, we may represent it as a (person x person) matrix and decompose that matrix to learn latent features. Decompositions (eg, SVD) usually work on two-dimensional matrices. But say that we also know WHEN the emails were sent. We then have a three-dimensional array – a (person x person x time) tensor. Bader et al. showed how to decompose such three-way data.
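For a feel of what a three-way decomposition does, here is a generic CP (CANDECOMP/PARAFAC) factorisation by alternating least squares. Note this is a stand-in illustration, not ASALSAN itself, which among other things handles the asymmetry of who-emails-whom data:

```python
import numpy as np

# Toy CP decomposition: X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r].

def khatri_rao(U, V):
    """Column-wise Kronecker product: rows indexed by (u-row, v-row) pairs."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def cp_als(X, rank, iters=50, seed=0):
    """Alternating least squares: fix two factors, solve for the third."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    for _ in range(iters):
        A = X.reshape(I, -1) @ khatri_rao(B, C) \
            @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X.transpose(1, 0, 2).reshape(J, -1) @ khatri_rao(A, C) \
            @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X.transpose(2, 0, 1).reshape(K, -1) @ khatri_rao(A, B) \
            @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

For the email example, the columns of the two person factors pick out groups of correspondents, and the time factor shows when each group was active.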

1 Short paper
1) Trend Motif: A Graph Mining Approach for Analysis of Dynamic Complex Networks
Jin et al. proposed a way of mining complex networks whose edges have weights that change over time. More specifically, they extract temporal trends – trends of how weights change over time.

2 Workshop Papers
1) Aspect Summarization from Blogsphere for Social Study
Researchers have been able to classify sentiments of blog posts (eg, whether posts contain positive or negative reviews). Chang and Tsai built a system that marks a step forward – the ability to extract opinions from blog posts. In their evaluation, they showed how their system is able to extract pro and con arguments about abortion and gay marriage from real blog posts.

2) SOPS: Stock Prediction using Web Sentiment
To predict stock values, traditional solutions solely rely on past stock performance. To make more informed predictions, Sehgal and Song built a system that scans financial message boards, extracts sentiments expressed in them, and then learns the correlation between sentiment and stock performance. Upon what it learns, the system makes predictions that are more accurate than those of traditional methods.

3 Invited Talks (here are some excerpts from the speakers’ abstracts)
1) Bricolage: Data at Play (pdf)
There are a number of recently created websites (eg, Swivel, Many Eyes, Data 360) that enable people to collaboratively post, visualize, curate and discuss data. These sites “take the premise that communal, free-form play with data can bring to the surface new ideas, new connections, and new kinds of social discourse and understanding.” Joseph M. Hellerstein focused on opportunities for data mining technologies to facilitate, inspire, and take advantage of communal play with data.

2) Learning from Society (htm)
Ian Witten illustrated how learning from society in the Web 2.0 world will provide some of the value-added functions of the librarians who have traditionally connected users with the information they need. He also stressed the importance of designing functions that are “open” to the world, in contrast to the unfathomable “black box” that conceals the inner workings of today’s search engines.

3) Tensor Decompositions and Data Mining (pdf)
Matrix decompositions (such as SVD) are useful (eg, for ranking web pages, for recognizing faces) but are restricted to two-way tabular data. In many cases, it is more natural to arrange data into an N-way hyperrectangle and decompose it by using what is called a tensor decomposition. Tamara Kolda discussed several examples of tensor decompositions being used for hyperlink analysis for web search, computer vision, bibliometric analysis, cross-language document clustering, and dynamic network traffic analysis.