Author Archive

Morozov talk at the LSE

Wednesday, January 12th, 2011

Professional cyber-curmudgeon Evgeny Morozov will be speaking at the LSE on the 19th of January.

The war of words

Wednesday, January 5th, 2011

Evgeny Morozov’s latest skeptical article (of many) about the liberatory potential of communication technology describes a new trend in internet censorship: in addition to building national firewalls and knocking websites offline, some governments are trying to win the war of words on the internet by engaging their citizens in debate.

The Chinese government’s practice of paying citizens — the so-called “50 cent party” — to make pro-government points in online debates is well documented, and the propaganda department reported last year that “we have used the Internet to vigorously organize and launch positive propaganda, and actively strengthen our abilities to guide public opinion.” In Russia, Morozov claims that the Kremlin is engaging in “comment warfare” with its opponents. Yet what’s rarely asked in Western criticisms of such developments is how they compare to the operation of media power in our own societies. To the extent that authoritarian governments are engaging in public debate about their policies, rather than trying to silence such debate, are they not in fact moving towards a different form of governance — one in which winning the argument, rather than preventing the argument, is the foundation of legitimacy?

I’m not of course trying to claim that the internet in Russia or China is some kind of ideal Habermasian public sphere — but neither is the internet in Britain. “Public relations” and “spin” are accepted facts of life in Western politics, though few are honest enough to call them “propaganda”. We accept that a few hands control the news agenda, and that while anyone is in principle free to speak their mind, most will not be heard — yet we regard our society as essentially democratic, and Chinese or Russian society as essentially authoritarian.

I want to argue that such essentialism is mistaken, and that the mode of governance towards which China is rapidly moving — described by Rebecca MacKinnon as networked authoritarianism, and by Min Jiang as authoritarian deliberation — resembles in many ways the society in which we already live. That is not, of course, to say that they are identical; only that in both systems, winning the war of words is of paramount political importance.

This raises a difficult question for those who would like to quantify the democratising influence of communication technology: can we distinguish between propaganda and spin, between “comment warfare” and genuine debate — and if not, what does that mean for our understanding of democracy?

Magazines, marketers, middlemen and micropayments

Tuesday, January 4th, 2011

Fortune Tech is reporting that digital magazine subscriptions are already falling, before the new medium has even managed to reach the mainstream. Some in the publishing world would like to blame the problem on the difficulty of marketing digital magazines when Apple refuses to share customer data with publishers. Such complaints may be put to the test if Google’s prospective subscription service for Android gives publishers access to the customer data they’ve been asking for, as the Fortune article speculates.

How might the industry evolve if Apple, Google and the like have to compete to attract content creators by offering them customers’ data? This is an issue that goes beyond the new medium of digital magazines and calls into question the role of the device vendor as information gatekeeper.

One interesting possibility is that once creators can contact their customers they’ll deal with them directly, cutting device vendors out of future transactions and reducing their power as gatekeepers – and thus their bargaining power when trying to withhold further customer data. Given the potential for companies such as Apple and Google to abuse their position as Holders of the Holy Code-Signing Keys, I see this as a good thing; it might even be a step towards the kind of ‘peer-to-peer economics’ copyright reformers have long advocated, where artists and audiences benefit by cutting out rent-seeking middlemen. But for that to happen, there needs to be an easy way for creators to squeeze the occasional drop of cash from their newly-identified customers.

Perhaps Flattr, a click-based micropayment system, can fill that niche. But I wonder whether we can’t take its core insight – minimising mental transaction costs – even further. Why do I need to click when my device already knows what I’m reading, playing, watching and listening to?

What I want to suggest isn’t a new idea – it’s just a combination of two existing ideas, Flattr and scrobbling. Creators would tag their content to indicate ownership, and the mobile device would keep track of how much time was spent on each creator’s content during the month, dividing a fixed budget among the creators at the end of the month. Mental transaction cost: zero.
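
In code, the end-of-month split would be trivial. Here’s a minimal sketch, with every name and the event format hypothetical:

```python
from collections import defaultdict

def split_budget(play_log, monthly_budget):
    """Divide a fixed monthly budget among creators in proportion to the
    time spent on each creator's tagged content, as a scrobbler might
    record it. All names here are hypothetical."""
    totals = defaultdict(float)
    for creator, seconds in play_log:
        totals[creator] += seconds
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}  # nothing consumed this month: nobody gets paid
    return {creator: monthly_budget * t / grand_total
            for creator, t in totals.items()}

# Example: half an hour of one writer, an hour of one musician, a £3 budget
print(split_budget([("writer", 1800), ("musician", 3600)], 3.00))
# -> {'writer': 1.0, 'musician': 2.0}
```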

The problem, of course, is that if users set their monthly budgets to zero, nobody gets paid. Will creators be willing to risk giving their work away for free in return for circumventing the middlemen and earning 100% of whatever people are willing to pay? If sales of digital magazines are anything to go by, they might not have much to lose…

The other America

Friday, July 23rd, 2010

The only sign most people have that they’re passing the headquarters of the world’s largest spy agency is that their GPS stops working.

[Image: a classified military unit’s insignia reading “NOYFB”, from Trevor Paglen’s Symbology project]

It’s with such evocative details that the Washington Post paints its portrait of Top Secret America, an “alternative geography of the United States” that took more than two years of research to assemble from public records. Parts of the project are so crowded with Byzantine bureaucracies, mysterious devices and vast sums of unaccounted-for money that they read like mediaeval travellers’ tales; others focus on the quotidian, from casino-themed networking nights for employees with Top Secret clearance to the Director of Counterterrorism’s battle to read all his email on the same computer.

It’s perhaps appropriate that this project has, like the maneuvering rival-allies in the cozily named US Intelligence Community, its own double across disciplinary lines — Trevor Paglen has spent eight years exploring and documenting the “black world” in a series of books and exhibitions (the image above is from his Symbology project on the insignia of classified military units). But whereas the Post is hungry for facts, to the point that its investigation ironically suffers from the same information overload it diagnoses in the intelligence world, Paglen’s lens is always focussed on the point where certainty ends and secrecy begins: the holes in the map that modernity promised to sew shut, and that its left hand is now busily unpicking as quickly as its right can close them.

If we had thought our networks, processors and databases would push back the frontiers of ignorance, we were right; what we couldn’t have known was that the territory would be populated by chimeras: that which we know but can’t reveal, that which we know but can’t find, and that which we know but can’t understand.

Google threatens to withdraw from China

Wednesday, January 13th, 2010

Google is threatening to pull out of China because of attacks on its servers aimed at Chinese human rights activists:

“These attacks and the surveillance they have uncovered — combined with the attempts over the past year to further limit free speech on the web — have led us to conclude that we should review the feasibility of our business operations in China. We have decided we are no longer willing to continue censoring our results on Google.cn, and so over the next few weeks we will be discussing with the Chinese government the basis on which we could operate an unfiltered search engine within the law, if at all. We recognize that this may well mean having to shut down Google.cn, and potentially our offices in China.”

(Via The Lowy Interpreter)

Intelligence-gathering by sneakernet

Tuesday, January 5th, 2010

A new report by senior US intelligence officers recommends sweeping changes to intelligence-gathering practices in Afghanistan. The two most interesting recommendations:

  • Intelligence work should be divided along geographic, rather than functional, lines. “The alternative – having all analysts study an entire province or region through the lens of a narrow, functional line (e.g. one analyst covers governance, another studies narcotics trafficking, a third looks at insurgent networks, etc) – isn’t working.” (p4)
  • Analysts should aggregate intelligence by regularly travelling to visit those who collect it. “Information essential to the successful conduct of a counterinsurgency is ripe for retrieval, but analysts that remain confined to restricted-access buildings in Kabul or on Bagram and Kandahar Airfields cannot access it.” (p17) The internet is not suitable for this purpose because “vital information piles up in obscure SharePoint sites, inaccessible hard drives, and other digital junkyards.”

The first point interests me because it suggests that problem-solving doesn’t always scale through specialisation, as tends to be assumed in academia: when the flow of information is constricted, a geographically-organised hierarchy of generalists may be more effective than a taxonomically-organised hierarchy of specialists.

The second point bears more directly on mobblog’s research interests (though I’m not suggesting we should design communication systems for the US military): manual aggregation and curation of information are still necessary, even when that information is in digital form. More surprisingly, the oldest method of aggregation – sneakernet – remains the most reliable.

The issues discussed in the report might seem specific to the chaotic and poorly connected environment of Afghanistan, but I want to argue that the fundamental problem – finding relevant information in a shifting sea of circumstances, practices, organisational structures and data formats – exists everywhere, and is not solved by better connectivity, nor by making everything digital.

David Weinberger has suggested that in the digital realm, tags will replace taxonomies and it will no longer be necessary to separate the organisation of information from its retrieval; but while the notion of a ‘hierarchy of generalists’ does cast doubt on the usefulness of a priori taxonomies, the recommendation of manual data collection and curation is directly opposed to Weinberger’s ‘tag soup’ approach.

Does this simply reflect a lack of tools (or, God help us, standards), or is the complexity of real-world information as irreducible to tags as it is to taxonomies? Anyone who’s used Google Images will recognise the difficulty of applying tags to non-textual data; assuming the sea never stops shifting, will the extraction of relevant knowledge from information always be a matter of – well – intelligence?

Code and other laws of urban space

Friday, October 23rd, 2009

Mobile phones offer more radical possibilities than ‘PC + internet’ in terms of bringing information into the real spatial environment, argues The City Project – which means architects and urban planners need to start engaging with the way space is experienced and manipulated through mobile software. Map-tagging and location-tracking could help planners to understand how space is used, reducing the tension between the ideal space of architecture and the real space of inhabitation.

So if the prophets of user-generated-everything need to learn that space matters, do those who dream of clean, Cartesian space also need to learn that use matters? No doubt – but to reduce location-aware software to a feedback channel from users to developers (in either sense), or to see it as another element in an architectural programme, would be to miss its truly radical potential, which would lie – if sufficiently open platforms could be developed – in enabling the unplanned, disorganised and ever-changing use of space, without architects.

Deconstructing “the Twitter revolution”

Tuesday, July 7th, 2009

Hamid Tehrani of Global Voices gives a sober assessment of the role of Twitter in the Iranian election protests. One of the issues he raises is the temptation to relay breaking news without verifying it. The open source Ushahidi project, which was initially developed to aggregate and map reports of violence following the Kenyan elections in 2007/8, has proposed crowdsourced filtering to deal with this problem. However, the question remains: how can the people aggregating and filtering first-hand reports determine what’s true? Does citizen journalism still require a layer of professional editors, experts and fact-checkers, or can all these functions be shared among the crowd?

‘Ruthlessness gene’ discovered

Monday, April 7th, 2008

Researchers at the Hebrew University in Jerusalem found a link between a gene called AVPR1a and ruthless behaviour in an economic exercise called the ‘Dictator Game’.

New swarming algorithm only tracks 7 neighbours

Thursday, January 31st, 2008

A new study of starling flocks has revealed that each bird only needs to track its nearest six or seven neighbours, regardless of their physical distance, to keep the flock cohesive. Previous models such as boids were based on each bird tracking every other bird within a certain range.
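
For contrast with boids’ fixed-radius rule, here’s a minimal sketch of a topological alignment step, in which each bird follows its k nearest neighbours by rank. The weights and parameters are invented, and real flocking models add cohesion and separation terms:

```python
import numpy as np

def topological_alignment(positions, velocities, k=7, dt=0.1):
    """One alignment step in which each bird steers towards the mean
    heading of its k nearest neighbours by rank (topological rule),
    rather than every bird within a fixed radius (metric rule, as in
    classic boids). A sketch only."""
    new_velocities = np.empty_like(velocities)
    for i in range(len(positions)):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        nearest = np.argsort(dists)[1:k + 1]  # k nearest, skipping self
        mean_heading = velocities[nearest].mean(axis=0)
        new_velocities[i] = 0.9 * velocities[i] + 0.1 * mean_heading
    return positions + new_velocities * dt, new_velocities

# 100 birds with random positions and headings
pos = np.random.rand(100, 2)
vel = np.random.randn(100, 2)
pos, vel = topological_alignment(pos, vel)
```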

Netflix Prize dataset de-anonymised

Wednesday, December 19th, 2007

Two researchers at the University of Texas have de-anonymised (re-nymised? nymified?) the Netflix Prize dataset.

DelFly: Tiny Robotic Ornithopter Spy

Saturday, November 3rd, 2007

BoingBoing has a video of a tiny camera-carrying ornithopter developed at the Delft University of Technology. The ornithopter has a 35 cm wingspan and can carry a camera and video transmitter for 17 minutes. The next model will have a 10 cm wingspan.

As usual, the researcher “suggests that it could be used to locate victims in collapsed buildings”. If that happens before they’re used for police surveillance or military targeting, I’ll be pleasantly surprised.

Trust propagation and the origins of PageRank

Wednesday, February 7th, 2007

Since Matteo’s seminar about neighbourhood maps a couple of months ago I’ve been wondering whether PageRank could be applied to a local view of a social network to calculate trust scores. (This might be useful in the new darknet version of Freenet, for example.) One of the Freenet developers pointed out that PageRank is patented, but Wikipedia showed that using eigenvector centrality to calculate the importance of nodes isn’t a new idea.

After following a few references, it turns out that the idea of propagating trust/status/etc. across a graph dates back to at least 1953 [1]. Pinski and Narin [2] suggested normalising each node’s output by dividing the output on each outgoing edge by the node’s outdegree. Geller [3] pointed out that their model was equivalent to a Markov chain: the scores assigned to the nodes followed the Markov chain’s stationary distribution. In other words, propagating trust/status/etc. with normalisation at each node is equivalent to taking random walks from random starting points and counting how many times you end up at each node.
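
To make the equivalence concrete, here’s the propagation rule in standard notation (my paraphrase, not the papers’ own):

```latex
% Pinski and Narin's normalisation: node i divides its score r_i
% equally among its outgoing edges, so each node j receives
r_j = \sum_{i \,:\, i \to j} \frac{r_i}{\mathrm{outdeg}(i)}
% Geller's observation: setting P_{ij} = 1/\mathrm{outdeg}(i) for each
% edge i -> j (and 0 otherwise), this is just r = P^{\top} r, i.e. r is
% the stationary distribution of the Markov chain with transitions P.
```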

The only difference between Geller’s model and PageRank is the damping factor: in PageRank you continue your random walk with probability d or jump to a random node with probability 1-d. (Incidentally, when the algorithm’s described this way rather than in terms of a transition matrix, it’s easy to see how you could implement it on a web spider.)
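
And here’s the random-walk description as a minimal simulation. The graph and parameters are illustrative, and dangling nodes are handled by simply ending the walk, a simplification:

```python
import random
from collections import Counter

def pagerank_by_walks(graph, d=0.85, walks=100000):
    """Estimate PageRank by simulation: start at a uniformly random node;
    at each step continue along a random outgoing edge with probability d,
    otherwise end the walk; count where walks end. `graph` maps each node
    to a list of its successors."""
    nodes = list(graph)
    endpoints = Counter()
    for _ in range(walks):
        node = random.choice(nodes)
        while random.random() < d and graph[node]:
            node = random.choice(graph[node])
        endpoints[node] += 1
    return {n: endpoints[n] / walks for n in nodes}

# B and C both point to A, so walks pile up at A
print(pagerank_by_walks({"A": ["B"], "B": ["A"], "C": ["A"]}))
```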

[1] L. Katz, “A new status index derived from sociometric analysis,” Psychometrika 18 (1953), pp. 39-43.
[2] G. Pinski and F. Narin, “Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics,” Information Processing and Management 12 (1976), pp. 297-312.
[3] N. Geller, “On the citation influence method of Pinski and Narin,” Information Processing and Management 14 (1978), pp. 93-95.

Scale-free networks emerging from weighted random graphs

Friday, January 5th, 2007

An alternative explanation for the emergence of scale-free degree distributions: http://polymer.bu.edu/hes/articles/ksbbhs06.pdf

A uniformly random weight is assigned to each edge in a classical random graph. Nodes connected by edges with weights less than the graph’s percolation threshold are collapsed into supernodes. The resulting graph has a power law degree distribution.
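
Here’s a rough simulation of the construction using networkx, assuming the standard bond-percolation threshold of a classical random graph (one over the mean degree):

```python
import random
import networkx as nx

n, p = 10000, 5 / 10000                 # classical random graph, <k> = 5
G = nx.gnp_random_graph(n, p, seed=42)
for u, v in G.edges:
    G[u][v]["w"] = random.random()      # uniform random edge weights

# Keeping edges with weight below q keeps each edge with probability q,
# so criticality is at q = 1/<k> (an assumption of this sketch)
threshold = 1 / (n * p)
kept = [(u, v) for u, v, d in G.edges(data=True) if d["w"] < threshold]

# Collapse each connected component of the sub-threshold edges into a
# supernode, then rebuild the graph on the supernodes
sub = nx.Graph(kept)
sub.add_nodes_from(G)
label = {v: i for i, comp in enumerate(nx.connected_components(sub))
         for v in comp}
H = nx.Graph((label[u], label[v]) for u, v in G.edges
             if label[u] != label[v])

print(sorted((deg for _, deg in H.degree()), reverse=True)[:10])
# the supernode degree sequence is heavy-tailed (approximately power law)
```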

BitTyrant: a selfish BitTorrent client that improves performance

Friday, January 5th, 2007

BitTyrant is a BitTorrent client with a novel unchoking algorithm.

Suppose your upload capacity is 50 KBps. If you’ve unchoked 5 peers, existing clients will send each peer 10 KBps, independent of the rate each is sending to you. In contrast, BitTyrant will rank all peers by their receive / sent ratios, preferentially unchoking those peers with high ratios.
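
As a sketch of that ranking step, with invented rates (the real client also continuously re-estimates what each peer sends and demands, which this omits):

```python
def choose_unchoked(peers, upload_capacity):
    """Pick peers to unchoke in the spirit of BitTyrant's ranking step:
    sort by the ratio of download rate offered to upload rate demanded,
    then unchoke greedily until upload capacity runs out. Rates are
    assumed positive; all numbers are illustrative.

    peers: list of (peer_id, download_rate_offered, upload_rate_demanded)
    """
    ranked = sorted(peers, key=lambda p: p[1] / p[2], reverse=True)
    unchoked, budget = [], upload_capacity
    for peer_id, down, up in ranked:
        if up <= budget:
            unchoked.append(peer_id)
            budget -= up
    return unchoked

# 25 KBps of capacity: favour peers that pay back the most per KBps sent
print(choose_unchoked([("a", 30, 10), ("b", 5, 25), ("c", 20, 10)], 25))
# -> ['a', 'c']
```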

In evaluation on more than 100 real BitTorrent swarms, BitTyrant provided an average 70% improvement in download performance over the existing Azureus 2.5 implementation, with some downloads finishing more than three times as quickly.

I wonder how well it performs in swarms of other BitTyrant clients?

The USENIX paper is here.

Update: it seems to be using the same ‘faster than the bear’ algorithm I came up with last year. Damn it, I should have tried it out. :-)