Archive for the ‘web 2.0’ Category

The ladder of fame: Few tyrants at the top

Thursday, July 3rd, 2008

To write a decent research statement (one showing a “vision”), I have turned into a McKinsey research analyst these days – I’m reading far more McKinsey Quarterly reports than academic papers, and they aren’t that bad! ;-) In a report dating back to August 2007, the authors surveyed 573 users of 4 leading video-sharing websites in Germany and found that:


IYOUIT: mobile service to share personal experiences on the go

Thursday, June 26th, 2008

Officially released today: “[...] IYOUIT allows for an instant automated sharing of personal experiences within communities online. [...] The cutting edge of IYOUIT is in how information about and around the mobile user is automatically collected, analyzed and enriched for an enhanced user experience and extra value to Web2.0 services. [...] IYOUIT is based on its own framework of software components to host various services and data sources. Framework components, for instance, track the positions of users via GPS and cellular information and identify places of interest over time by learning from their past behavior. Sharing your life with IYOUIT is easy! In the same way that you can communicate experiences to others, IYOUIT provides you with easy access to the whereabouts of your buddies, informs you about local weather conditions and uploads photos you take and sounds you record. And if you come across an interesting book (or other products), simply take a picture of the ISBN code or the product ID with your phone, and IYOUIT will fill in the blanks for instant exchange with your friends. IYOUIT also records scanned Bluetooth or WLAN beacons and aggregates all data mentioned before into a wealth of context information that you may share with others worldwide on the Web and on the mobile phone.”

More info and free downloads @

IFIPTM Monday workshops (CAT, W2Trust)

Monday, June 23rd, 2008

The Monday workshop sessions of IFIPTM 2008 were a combination of the second workshop on Context-Awareness and Trust (CAT) and the first workshop on Web 2.0 Trust (W2Trust). See the W2Trust website for the full list of papers. In this post, we summarize what we saw.


Get Ready to Rummble!

Friday, June 20th, 2008

The very last session of the IFIPTM 2008 conference was a demo session; three demos ran, and the one I liked most was Rummble. Rummble is a website that, much like other web 2.0 ideas, has a social network as its foundation; the interesting addition, though (and what makes it so appropriate for a conference on trust), is that when you add a friend you can say how much you trust their opinions. You then go on to “rummble” different locations (shops/restaurants/clubs) by rating, tagging, and describing them with a comment. The neat thing is that, by combining trust, rating similarity, and social distance, the site can predict how much you will like places you have not yet rummbled, and colours them accordingly. The site is also fully mobile! (more…)
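A back-of-the-envelope sketch of how such a prediction might work (my own guess at the mechanics, not Rummble’s actual algorithm): weight each friend’s rating of a place by how much you trust them and how similar their past ratings are to yours.

```python
def predict_rating(friend_ratings, trust, similarity):
    """Predict how much a user will like an unvisited place by
    averaging friends' ratings, weighted by trust x similarity.

    friend_ratings: {friend: rating of the place}
    trust:          {friend: how much the user trusts them, 0..1}
    similarity:     {friend: rating similarity to the user, 0..1}
    """
    num = den = 0.0
    for friend, rating in friend_ratings.items():
        w = trust.get(friend, 0.0) * similarity.get(friend, 0.0)
        num += w * rating
        den += w
    return num / den if den else None

# Example: two trusted, similar friends and one distant acquaintance.
ratings = {"alice": 5.0, "bob": 4.0, "carol": 1.0}
trust = {"alice": 0.9, "bob": 0.8, "carol": 0.1}
sim = {"alice": 0.9, "bob": 0.7, "carol": 0.2}
print(predict_rating(ratings, trust, sim))
```

The distant acquaintance barely moves the prediction, which is exactly the point of weighting by trust.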

On Twitter’s business model

Thursday, June 5th, 2008

Interesting article in today’s Guardian. “Twitter users are increasingly inured to outages. … Why? … every new tweet (a single message) gets written on to one MySQL database; that is then replicated to multiple slave databases, from which all the “reads” are taken” (more).
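The architecture the article describes (one write master, many read replicas) is a classic scaling pattern. Here is a toy sketch of the read/write split, with made-up class and method names, just to illustrate why the single write path becomes the bottleneck:

```python
class ReplicatedStore:
    """Toy model of the single-master/multi-slave split described
    above: every write goes to one master and is then copied to the
    replicas; reads are spread over the replicas (which is why the
    lone master is the part that cannot be scaled out)."""

    def __init__(self, n_replicas=3):
        self.master = []                       # the single write path
        self.replicas = [[] for _ in range(n_replicas)]
        self._next = 0

    def write(self, tweet):
        self.master.append(tweet)              # bottleneck: one DB
        for r in self.replicas:                # then replicate out
            r.append(tweet)

    def read(self):
        r = self.replicas[self._next % len(self.replicas)]
        self._next += 1                        # round-robin the reads
        return list(r)

store = ReplicatedStore()
store.write("first tweet")
store.write("second tweet")
print(store.read())
```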

What Twitter needs is to expand its capacity while making money from those who are using it. … it needs to deter some people from using it – while benefiting from those who continue to. There are two obvious ways forward. Charge the users, or charge those who want to get at the users. The first option is fine – if it wants to lose 90% of its user base (the rough tradeoff any service sees if it begins charging, however little). The second option might look puzzling, but it has worked before, in the MP3 market.

The future of Mobile Web 2.0

Wednesday, May 28th, 2008

Interesting forecast by Juniper Research. The global market for Mobile Web 2.0 will be worth $22.4bn in 2013. Here is their short whitepaper: Share, Collaborate, Exploit ~ Defining Mobile Web 2.0.

The Culture of the Amateur

Thursday, April 10th, 2008

If you are running particularly long experiments like me, or are looking for something to watch for 45 minutes, then I suggest this video on YouTube: a documentary about truth and Wikipedia. It features interviews with big pro- and anti-web 2.0 names, and discusses the extent to which sites like Wikipedia encourage truth, freedom, and democracy (or mob rule, lies, and social fragmentation).

Researching new mobile applications

Saturday, March 15th, 2008

“With the new iPhone SDK, it’s just a matter of time before we see a wave of new Web 2.0 applications.” Here are 12 Future Apps For Your iPhone (which may well inspire our research agenda):

1. Reality Tagging
2. People Tagging
3. Reality Recognition
4. Physical Social Networks
5. Personalized Travel Guides
6. Digital and Physical Treasure Hunt
7. Distributed Mobile Games
8. Credit Card and Biometrics as Software
9. Paperless Receipts & Digital Business Cards
10. Medical records as Software
11. Physical Browsing & Digital Shopping
12. Location/time-based deals

P2P Lending (good web 2.0)

Saturday, March 8th, 2008

Also known as person-to-person lending. Borrowers and lenders come together directly on the web and do not need banks. Thanks to Mo, I recently found out that P2P lending is “working its way into the charitable sector: puts potential “social investors” in touch with small businesses in the developing world, which promise to send e-mail updates on how the business is developing.” Here is how it works (see this video presentation for more): (more…)

Social Graph API by Google

Monday, February 4th, 2008

The new Social Graph API “makes information about the public connections between people on the Web easily available and useful. You can make it easy for users to bring their existing social connections into a new website and as a result, users will spend less time rebuilding their social networks and more time giving your app the love it deserves”.

API docs & Google’s post.

ICDM 2007

Wednesday, December 5th, 2007

I attended ICDM (a data mining conference) this year. Since I cannot comment on all the papers I’ve found interesting, here is the full program and my comments on very few papers follow ;-)

6 Full papers

1) Improving Text Classification by Using Encyclopedia Knowledge

Existing methods for classifying text do not work well. That is partly because many terms are (semantically) related but do not co-occur in the same documents. To capture the relationships among those terms, one can use a thesaurus. Pu Wang et al. built a huge thesaurus from Wikipedia and showed that classification benefits from its use.
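The core idea can be illustrated with a toy thesaurus (standing in for the authors’ Wikipedia-derived one): expand each document’s term vector with related terms, so that documents on the same topic overlap even when their surface vocabularies do not.

```python
def expand_terms(doc_terms, thesaurus):
    """Add related terms from a thesaurus to a document's term set,
    so that documents about the same topic share features even when
    their surface vocabularies do not overlap."""
    expanded = set(doc_terms)
    for term in doc_terms:
        expanded.update(thesaurus.get(term, ()))
    return expanded

# Toy thesaurus standing in for the Wikipedia-derived one.
thesaurus = {
    "car": {"automobile", "vehicle"},
    "automobile": {"car", "vehicle"},
}

d1 = expand_terms({"car", "engine"}, thesaurus)
d2 = expand_terms({"automobile", "engine"}, thesaurus)
print(d1 & d2)  # the two documents now overlap on more than "engine"
```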

2) Scalable Collaborative Filtering with Jointly Derived Neighborhood Interpolation Weights
The most common form of collaborative filtering consists of three major steps:
(1) data normalization, (2) neighbour selection, and (3) determination of interpolation weights. Bell and Koren showed that different ways of carrying out the 2nd step do not impact predictive accuracy much. They then revisited the remaining two steps:
+ the 1st step by removing 10 “global effects” that cause substantial data variability and mask fundamental relationships between ratings,
+ and the 3rd step by computing interpolation weights as a global solution to an optimization problem.
By using these revisions, they considerably improved predictive accuracy, so much so that they won the Netflix Progress Prize.
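A minimal sketch of what the third step looks like, under simplifying assumptions (a tiny dense ratings matrix, no regularization or sparsity handling, unlike the real method): the interpolation weights are derived jointly as the solution of a least-squares problem, rather than set heuristically from pairwise similarities.

```python
import numpy as np

def interpolation_weights(R, neighbours, target_item):
    """Jointly derive interpolation weights w so that the target
    item's rating column is best approximated (in the least-squares
    sense) by a weighted combination of its neighbours' columns.

    R: users x items rating matrix (dense here for simplicity).
    """
    A = R[:, neighbours]           # ratings of the neighbour items
    b = R[:, target_item]          # ratings of the item to predict
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Toy 4-users x 3-items matrix; predict item 2 from items 0 and 1.
R = np.array([[5.0, 3.0, 4.0],
              [4.0, 2.0, 3.0],
              [1.0, 5.0, 3.0],
              [2.0, 4.0, 3.0]])
w = interpolation_weights(R, [0, 1], 2)
pred = R[:, [0, 1]] @ w            # predictions for the known column
print(np.round(pred, 2))           # → [4. 3. 3. 3.]
```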

3) Lightweight Distributed Trust Propagation
Soon individuals will be able to share digital content (eg, photos, videos) using their portable devices in a fully distributed way (without relying on any server). We presented a way for portable devices to select content from reputable sources in a distributed manner (as opposed to previous work, which focuses on centralized solutions).

4) Analyzing and Detecting Review Spam
Here Jindal and Liu proposed an effective way of detecting spam in product reviews.

5) Co-Ranking Authors and Documents in a Heterogeneous Network
Existing ways of ranking network nodes (eg, PageRank) work on homogeneous networks (networks whose nodes represent the same kind of entity, eg, nodes of a citation network usually represent publications). But most networks are heterogeneous (eg, a citation network may well have nodes that are either publications or authors). To rank nodes of heterogeneous networks, Zhou et al. proposed a way that couples random walks. In a citation network, this translates into two random walks that separately rank authors and publications (rankings of publications and their authors depend on each other in a mutually reinforcing way).
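A rough, simplified sketch of the mutual-reinforcement idea (closer to a HITS-style iteration than to Zhou et al.’s exact coupled random walks): repeatedly refresh each side’s scores from the other’s on a bipartite author–publication graph.

```python
import numpy as np

def corank(auth_pub, iters=50):
    """Mutually reinforcing ranking on a bipartite network: authors
    score highly if they write highly ranked publications, and
    publications score highly if written by highly ranked authors.
    auth_pub[i, j] = 1 if author i wrote publication j."""
    n_auth, n_pub = auth_pub.shape
    a = np.ones(n_auth) / n_auth
    p = np.ones(n_pub) / n_pub
    for _ in range(iters):
        p = auth_pub.T @ a             # publications inherit author scores
        p /= p.sum()
        a = auth_pub @ p               # authors inherit publication scores
        a /= a.sum()
    return a, p

M = np.array([[1.0, 1.0, 0.0],   # author 0 wrote pubs 0 and 1
              [0.0, 1.0, 0.0],   # author 1 wrote pub 1
              [0.0, 0.0, 1.0]])  # author 2 wrote pub 2
authors, pubs = corank(M)
print(np.round(authors, 2), np.round(pubs, 2))
```

Author 0 ends up ranked highest because they wrote the most publications, including the co-authored, highly ranked pub 1.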

6) Temporal analysis of semantic graphs using ASALSAN
Say we have a large dataset of emails that employees of a company (eg, of Enron) have exchanged. To make sense of that dataset, we may represent it as a (person x person) matrix and decompose that matrix to learn latent features. Decompositions (eg, SVD) usually work on a two-dimensional matrix. But say that we also know WHEN emails have been sent. That is, we have a three-dimensional matrix – (person x person x time) matrix. Bader et al. showed how to decompose 3-dimensional matrices.
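To make the three-way data concrete, here is a toy (person x person x time) tensor with a naive unfolding-plus-SVD stand-in for the decomposition; ASALSAN itself fits a three-way DEDICOM model that keeps the time mode explicit rather than flattening it away.

```python
import numpy as np

# Toy (person x person x time) email-count tensor: T[i, j, t] is
# how many emails person i sent person j in month t.
T = np.zeros((3, 3, 2))
T[0, 1, 0] = 5   # person 0 mails person 1 a lot in month 0
T[0, 1, 1] = 4   # ...and keeps doing so in month 1
T[1, 2, 1] = 3   # person 1 starts mailing person 2 in month 1

# Naive stand-in for a tensor decomposition: unfold the tensor into
# a (senders x receiver-month-pairs) matrix and take its SVD to get
# latent sender features. A true three-way method avoids this
# flattening and so can model how the latent structure evolves.
unfolded = T.reshape(T.shape[0], -1)
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
print(np.round(np.abs(U[:, 0]), 2))   # first latent sender feature
```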

1 Short paper
1) Trend Motif: A Graph Mining Approach for Analysis of Dynamic Complex Networks
Jin et al. proposed a way of mining complex networks whose edges have weights that change over time. More specifically, they extract temporal trends – trends of how weights change over time.

2 Workshop Papers
1) Aspect Summarization from Blogsphere for Social Study
Researchers have been able to classify sentiments of blog posts (eg, whether posts contain positive or negative reviews). Chang and Tsai built a system that marks a step forward – the ability to extract opinions from blog posts. In their evaluation, they showed how their system is able to extract pro and con arguments about abortion and gay marriage from real blog posts.

2) SOPS: Stock Prediction using Web Sentiment
To predict stock values, traditional solutions solely rely on past stock performance. To make more informed predictions, Sehgal and Song built a system that scans financial message boards, extracts sentiments expressed in them, and then learns the correlation between sentiment and stock performance. Upon what it learns, the system makes predictions that are more accurate than those of traditional methods.
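A hypothetical, bare-bones version of the “learn the correlation” step (data and numbers are made up; the actual SOPS system is far more elaborate): fit a line from board sentiment to next-day return and use it for prediction.

```python
import numpy as np

# Made-up daily data: average message-board sentiment (-1..1) and
# next-day stock return (%). The idea, as in SOPS, is to learn how
# sentiment correlates with performance and predict from the fit.
sentiment = np.array([-0.8, -0.3, 0.0, 0.4, 0.9])
next_ret = np.array([-2.1, -0.9, 0.2, 1.0, 2.3])

slope, intercept = np.polyfit(sentiment, next_ret, 1)

def predict(s):
    """Predicted next-day return for a given sentiment score."""
    return slope * s + intercept

print(round(predict(0.5), 2))   # mildly positive chatter
```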

3 Invited Talks (here are some excerpts from the speakers’ abstracts)
1) Bricolage: Data at Play (pdf)
There are a number of recently created websites (eg, Swivel, Many Eyes, Data 360) that enable people to collaboratively post, visualize, curate and discuss data. These sites “take the premise that communal, free-form play with data can bring to the surface new ideas, new connections, and new kinds of social discourse and understanding”. Joseph M. Hellerstein focused on opportunities for data mining technologies to facilitate, inspire, and take advantage of communal play with data.

2) Learning from Society (htm)
Ian Witten illustrated how learning from society in the Web 2.0 world will provide some of the value-added functions of the librarians who have traditionally connected users with the information they need. He also stressed the importance of designing functions that are “open” to the world, in contrast to the unfathomable “black box” that conceals the inner workings of today’s search engines.

3) Tensor Decompositions and Data Mining (pdf)
Matrix decompositions (such as SVD) are useful (eg, for ranking web pages, for recognizing faces) but are restricted to two-way tabular data. In many cases, it is more natural to arrange data into an N-way hyperrectangle and decompose it by using what is called a tensor decomposition. Tamara Kolda discussed several examples of tensor decompositions being used for hyperlink analysis for web search, computer vision, bibliometric analysis, cross-language document clustering, and dynamic network traffic analysis.

Whatsoever u say, i trust u because u r my friend

Thursday, November 29th, 2007

Say that in the near future you will be able to post a question on social network sites. You will get many different answers (whose quality may vary). It would be nice if you could get a list of answers ranked by quality.

Problem: how to rank answers? That’s a cool problem that is disappointingly hard to solve.

Some would argue for using social networks – people close to you should be trusted, and they can surely answer any type of question. For me, that’s hard to believe. That is probably because my friends have diverging interests (a few know about technology or finance, a handful know about literature, and many about architecture and design – I have a weakness for creative people). And they also think differently – I’m fortunate enough to have a few friends who are left-brainers, while most of them go for the “right” side. If many people’s experience matches mine, then we could deem those solutions oversimplified at best.

So how to go about the initial problem? For a start, I would acknowledge that:
a) “X befriends Y” and “X trusts Y” are two totally different concepts. I’m overemphasizing by saying “totally”: after all, there may be a correlation between the two concepts; but it is difficult to buy into the causation arrow “I befriend you” -> “I trust you”. Therefore, that distinction may be important (if not crucial), and we usually underemphasize it “for simplicity’s sake”.
b) “X trusts Y” has little meaning. Since trust is context dependent (1, 2, 3), one needs to specify for what X trusts Y. X may trust Y for academic tips but not for real-world issues ;-). So a better way could be “X trusts Y for doing Z”, and that Z would be crucial.
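A tiny sketch of what point (b) implies for a data model (names and structure are mine, not any particular system’s): store trust as (truster, trustee, context) triples rather than plain edges.

```python
# Context-specific trust as (truster, trustee, context) -> score,
# instead of a plain "X trusts Y" edge. Purely illustrative of
# point (b); not the design of any particular system.
trust = {}

def set_trust(x, y, context, score):
    trust[(x, y, context)] = score

def get_trust(x, y, context):
    """X may trust Y for one thing and not another; an unseen
    context yields no trust rather than inherited trust."""
    return trust.get((x, y, context), 0.0)

set_trust("alice", "bob", "academic tips", 0.9)
set_trust("alice", "bob", "restaurant advice", 0.2)

print(get_trust("alice", "bob", "academic tips"))      # high
print(get_trust("alice", "bob", "restaurant advice"))  # low
print(get_trust("alice", "bob", "finance"))            # unknown context
```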

Given these two points, I really like this recent paper. The authors separate social networks and webs of trust (which they call vote-on networks), and they are planning to build around context-specific webs of trust. Great work!

Mobile Web 2.0 in London

Friday, July 27th, 2007

The first European conference on Mobile Web 2.0 will be in London (!) on the 18th and 19th of September. It will gather some of the best thinkers and doers. Agenda. More info here.

MobileMonday in BCN: Mobile Web 2.0

Tuesday, July 3rd, 2007

Yesterday I was at the MobileMonday Barcelona event. Three speakers talked about ‘Mobile Web 2.0’:

Patrick Lord introduced his company’s main application – MobiLuck. It’s a short-range wireless messaging application. MobiLuck allows you to detect all Bluetooth devices around you and to store a mini-profile in your Bluetooth name. The mini-profile can include, for example, your nickname, gender, age and phone number (dating profile) or your first name, last name, company and phone number (business profile). Your mini-profile is detected instantly by other MobiLuck users. You can then send a message (or your picture or your business card) with your mobile in a few clicks, for free, without needing the recipient’s phone number.

Lucia Garate (of Vodafone R&D, Madrid) presented Vodafone Betavine that aims to help developers create new and innovative services using mobile communications APIs.

Ajit Jaokar (London) listed some of the concepts behind his latest book (Mobile 2.0).

Similar events are offered in London.