Archive for the ‘crowdsourcing’ Category

Code and other laws of urban space

Friday, October 23rd, 2009

Mobile phones offer more radical possibilities than ‘PC + internet’ in terms of bringing information into the real spatial environment, argues The City Project – which means architects and urban planners need to start engaging with the way space is experienced and manipulated through mobile software. Map-tagging and location-tracking could help planners to understand how space is used, reducing the tension between the ideal space of architecture and the real space of inhabitation.

So if the prophets of user-generated-everything need to learn that space matters, do those who dream of clean, Cartesian space also need to learn that use matters? No doubt – but to reduce location-aware software to a feedback channel from users to developers (in either sense), or to see it as another element in an architectural programme, would be to miss its truly radical potential, which would lie – if sufficiently open platforms could be developed – in enabling the unplanned, disorganised and ever-changing use of space, without architects.

Netflix Prize – Round 2

Monday, September 21st, 2009

The Netflix Prize winners have been announced, along with the next $1 million competition. From here:

“The new challenge focuses on predicting the movie preferences of people who rarely or never rate the movies they rent. This will be deduced from more than 100 million data points, including information about renters’ ages, genders, ZIP codes, genre ratings and previously chosen movies.

Instead of a single $1 million prize, this new challenge will be split into one $500,000 award to the team judged to be leading after six months and an additional $500,000 to the team in the lead at the 18-month mark, when the contest is wrapped up.”
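As a rough illustration of what "deduced from demographic data" could mean in the simplest case, one might predict a renter's rating from the average rating of demographically similar renters. This is a hypothetical baseline sketch, not anything Netflix has described:

```python
from collections import defaultdict

# Hypothetical baseline: predict a rating from the mean rating of
# renters in the same (age band, gender) group. Illustrative only;
# the real contest data and features are richer than this.

def group_mean_predictor(history):
    """history: list of (age_band, gender, rating) tuples."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for age_band, gender, rating in history:
        key = (age_band, gender)
        sums[key] += rating
        counts[key] += 1
    overall = sum(sums.values()) / sum(counts.values())

    def predict(age_band, gender):
        key = (age_band, gender)
        # Fall back to the overall mean for unseen groups.
        return sums[key] / counts[key] if counts[key] else overall

    return predict

predict = group_mean_predictor([
    ("18-24", "F", 4.0), ("18-24", "F", 5.0), ("25-34", "M", 2.0),
])
predict("18-24", "F")  # 4.5
predict("65+", "M")    # falls back to the overall mean
```

A real entry would of course combine many such signals; this only shows the shape of the task.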

Interestingly, our previous discussion on the viability of the winner’s results now has an answer. From here:

The team’s 10 percent achievement will not be immediately incorporated into Netflix’s recommendation system, said Neil Hunt, chief product officer.

“There are several hundred algorithms that contribute to the overall 10 percent improvement – all blended together,” Hunt said. “In order to make the computation feasible to generate the kinds of volumes of predictions that we needed for a real system – we’ve selected just a small number – two or three of those algorithms for direct implementation.”
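The blending Hunt mentions can be pictured, in its simplest form, as a weighted average of the predictions of several component models. This is a minimal sketch of that idea, with made-up numbers, not the winning team's actual method:

```python
# Hypothetical blend: combine per-model rating predictions for one
# (user, movie) pair by a weighted average. Weights would normally be
# fit on a hold-out ("probe") set, not chosen by hand as here.

def blend(predictions, weights):
    """predictions: one predicted rating per model; weights: one
    non-negative weight per model, summing to 1."""
    assert len(predictions) == len(weights)
    return sum(p * w for p, w in zip(predictions, weights))

# Three hypothetical models rate the same movie for the same user:
model_outputs = [3.8, 4.2, 3.5]
weights = [0.5, 0.3, 0.2]
blend(model_outputs, weights)  # 0.5*3.8 + 0.3*4.2 + 0.2*3.5 = 3.86
```

The winners blended hundreds of models this way (with more sophisticated weighting), which is exactly why running only "two or three" of them is so much cheaper in production.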

Crowdsourcing goes mobile – The Extraordinaries

Wednesday, July 8th, 2009

We’ve discussed the potential of the Mechanical Turk for social research. Now here is a new crowdsourcing service on mobiles – short video (below), description, download the iPhone application. The Extraordinaries delivers skills-based volunteer tasks to people by mobile phone, whenever and wherever they are available.

Also, yesterday Aaron Shaw “presented upon his research into the potential Amazon’s Mechanical Turk holds for social science and the culture that surrounds it.” (here). “In Shaw’s view, there needs to be

a more serious examination of the question. Experimental evidence suggests subpopulations of people who would respond differently. Some people are motivated by doing good; others don’t care and just want the $0.05. We need better ways to test. It’s situation-specific.

Last point: The use of Mechanical Turk for enterprise search.

Deconstructing “the Twitter revolution”

Tuesday, July 7th, 2009

Hamid Tehrani of Global Voices gives a sober assessment of the role of Twitter in the Iranian election protests. One of the issues he raises is the temptation to relay breaking news without verifying it. The open source Ushahidi project, which was initially developed to aggregate and map reports of violence following the Kenyan elections in 2007/8, has proposed crowdsourced filtering to deal with this problem. However, the question remains, how can the people aggregating and filtering first-hand reports determine what’s true? Does citizen journalism still require a layer of professional editors, experts and fact-checkers, or can all these functions be shared among the crowd?
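One simple way to realise the crowdsourced filtering Ushahidi proposes, sketched here under the strong (and hypothetical) assumption that reports can be matched to one another by location and event, is to mark a report as credible only once a minimum number of distinct sources have confirmed it:

```python
from collections import defaultdict

# Sketch of crowd verification: a report becomes "credible" once k
# distinct sources have submitted matching reports. Keying reports by
# (location, event) is a big simplification; real matching is fuzzy.

class ReportFilter:
    def __init__(self, min_sources=3):
        self.min_sources = min_sources
        self.sources = defaultdict(set)  # report key -> set of source ids

    def submit(self, key, source_id):
        self.sources[key].add(source_id)
        return self.is_credible(key)

    def is_credible(self, key):
        return len(self.sources[key]) >= self.min_sources

f = ReportFilter(min_sources=2)
f.submit(("Tehran", "protest"), "user_a")  # False: one source
f.submit(("Tehran", "protest"), "user_a")  # False: same source repeated
f.submit(("Tehran", "protest"), "user_b")  # True: two distinct sources
```

Note what this does and doesn't solve: it filters out lone unconfirmed reports, but two mistaken (or colluding) sources still pass, which is exactly the open question about truth-checking raised above.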

DemocraBus (crowdsource bus driving): Genius!

Thursday, April 2nd, 2009

Yesterday I was watching Genius and I loved it! Members of the public write to the comedian Dave Gorman with their funny ideas. Then Dave gets a guest on the show to decide if the ideas are Genius, or not.

Brendan put forward a brilliant idea: how to crowdsource bus driving (a minute and a half of madness!) – every passenger has a steering wheel, and the direction of the bus is determined by what the majority of passengers tell it to do. Unfortunately, it got the thumbs down.
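The mechanism Brendan describes is just plurality voting over steering inputs. A minimal sketch (with a hypothetical `bus_direction` helper):

```python
from collections import Counter

# Plurality vote over passenger steering inputs: the bus goes wherever
# the largest number of passengers point their wheels. Ties are broken
# arbitrarily here; a real DemocraBus would need a better rule.

def bus_direction(wheel_inputs):
    """wheel_inputs: one of 'left' | 'right' | 'straight' per passenger."""
    direction, _ = Counter(wheel_inputs).most_common(1)[0]
    return direction

bus_direction(["left", "straight", "left", "right", "left"])  # 'left'
```

Which, of course, is also why it got the thumbs down: a majority of passengers pointing left says nothing about whether the road goes left.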

Experts tend to be hedgehogs and aren’t good at predicting

Friday, March 27th, 2009

From today’s NYT “Learning How to Think“:

“The expert on experts is Philip Tetlock, a professor at the University of California, Berkeley. His 2005 book, “Expert Political Judgment” (New Yorker Review), is based on two decades of tracking some 82,000 predictions by 284 experts. The experts’ forecasts were tracked both on the subjects of their specialties and on subjects that they knew little about.

The result? The predictions of experts were, on average, only a tiny bit better than random guesses — the equivalent of a chimpanzee throwing darts at a board. … The only consistent predictor was fame — and it was an inverse relationship. The more famous experts did worse than unknown ones.”

Idea 1: This result partly explains why crowdsourcing may be more accurate than aggregating expert opinions.

(Project) Idea 2: Since “we trumpet our successes and ignore failures”, we need a system that monitors and evaluates the records of various experts and pundits as a public service.

Lesson: “Hedgehogs tend to have a focused worldview, an ideological leaning, strong convictions; foxes are more cautious, more centrist, more likely to adjust their views, more pragmatic, more prone to self-doubt, more inclined to see complexity and nuance. And it turns out that while foxes don’t give great sound-bites, they are far more likely to get things right.”
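Idea 1 above rests on a statistical fact: averaging many independent, unbiased guesses cancels their individual errors. A quick simulation with made-up numbers shows the effect:

```python
import random

# Wisdom-of-crowds simulation: each guesser estimates a true value with
# independent noise; the crowd average ends up far closer to the truth
# than a typical individual guess.

random.seed(0)
TRUE_VALUE = 100.0
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(1000)]

crowd_error = abs(sum(guesses) / len(guesses) - TRUE_VALUE)
typical_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)
# crowd_error is roughly 20 / sqrt(1000), well under 1,
# while the typical individual error is around 16.
```

A lone expert is just one (possibly ideologically biased) guess; the crowd average gets the error-cancellation for free, provided the guesses really are independent.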

Crowdsourcing User Studies With Mechanical Turk

Tuesday, February 10th, 2009

We just finished our reading session of “Crowdsourcing User Studies With Mechanical Turk” (pdf). Very interesting paper. A few hand-written notes on which types of tasks we would run on the Mechanical Turk.

The problem with web 2.0 crowds: imitative!

Wednesday, February 4th, 2009

I attended a talk by David Sumpter on “How animal groups make decisions” (hosted by Max Reuter). David is a mathematician working on self-organisation and decision-making based on simple rules. His team looked at behavioral rules that explain, for example, how birds fly together.

My take-away from his talk: group decision-making may be better than individual decision-making ONLY if each member of the group makes decisions independently. Indeed, independence is one of the four elements of a wise crowd. Alas, I think that social prejudices make it impossible to reach independent decisions in our society. Does this suggest the end of the wisdom of crowds?
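The independence point can be made concrete with a small simulation (hypothetical numbers): when every guess shares a common bias, through imitation or social prejudice, averaging cancels only the independent part of the noise and the crowd inherits the bias:

```python
import random

# When guesses share a common bias, the crowd average keeps that bias:
# averaging cancels only the independent noise, not the shared error.

random.seed(1)
TRUE_VALUE, N = 100.0, 1000
shared_bias = 15.0  # e.g. everyone imitates the same wrong anchor

independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]
correlated = [TRUE_VALUE + shared_bias + random.gauss(0, 20) for _ in range(N)]

err_independent = abs(sum(independent) / N - TRUE_VALUE)  # near 0
err_correlated = abs(sum(correlated) / N - TRUE_VALUE)    # near 15
```

So imitation doesn't just make the crowd a bit worse; it puts a floor under the error that no amount of aggregation can remove.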

A few scribbled notes: