Diversity and The City: city size, density, and wealth

March 23rd, 2012

In the last few days, I have had the opportunity to read quite a lot on social media, ubicomp, architecture, etc., and I found only a few papers studying the relationship between city density and citizens’ use of social media. So I thought it might be useful to report what Jane Jacobs wrote in her book “The Death and Life of Great American Cities” in 1961:

… it still remains that dense concentrations of people are one of the necessary conditions for flourishing city diversity. And it still follows that in districts where people live, this means there must be a dense concentration of their dwellings on the land preempted for dwellings.

She makes the distinction between density and overcrowding:

Overcrowding within dwellings or rooms, in our country, is almost always a symptom of poverty or of being discriminated against, and it is one (but only one) of many infuriating and discouraging liabilities of being very poor or of being victimized by residential discrimination, or both.

Warning 1: Overcrowding is not necessarily connected to density. Jacobs actually says that overcrowding might be more commonly seen in low-density areas. So the number of dwellings per acre might be more informative.

Warning 2: City diversity is different from city wealth. The idea is that

city density contributes to city diversity: low concentration areas (suburbs) can only support businesses of the dominant culture, while cities can afford diverse businesses…

To sum up, the relationship between wealth and density is a complicated one :)

Now, a couple of months ago, the bright minds behind The Economist Intelligence Unit published a very nice report titled Hot Spots. In it, they build a competitiveness index for cities. Toward the end of the report (you can read it here), they add:

Cities of all sizes can be competitive, but density is a factor in the competitiveness of larger cities. The top ten most competitive cities in this ranking range from the world’s biggest (Tokyo’s estimated 36.7m people) to some of its smallest (Zurich’s estimated 1.2m). Indeed, there is no correlation seen between size and competitiveness in the Index. While bigger cities offer a greater pool of labour and higher demand, as well as potential economies of scale, if they are not planned correctly congestion and other issues can actively impede their competitiveness. Urban density is clearly linked to higher productivity: Hong Kong’s efficient density is one reason it performs far better in the Index than, say, Mexico City’s inefficient urban sprawl.

So the conclusion is:

So while size can bring advantages in terms of a city’s overall competitiveness, it will only do so if it is carefully planned. Greater density can help, although this isn’t necessarily the only solution. Overall, however, there is no clear correlation between absolute population size and overall competitiveness.

p.s. This is an interesting post on density vs. Jacobs’ density.

p.s. Another interesting post on density vs. livability 

Telcos’ strategies: Failing, but not yet a failure

March 21st, 2012

This is a very old article in The Economist, yet it is still relevant.

Hewlett-Packard … plans to scatter millions of sensors around the world … It is doing this to increase demand for its hardware, but it also hopes to offer services based on networks of sensors. For instance, a few thousand of them would make it possible to assess the state of health of the Golden Gate bridge in San Francisco, says Stanley Williams, who leads the development of the sensors at HP. “Eventually”, he predicts, “everything will become a service.” Apple, though it prides itself on its fancy hardware, is already well on its way towards transforming itself into a service and data business thanks to the success of its iPhone. … Much of the innovation in this field may come not from incumbents but from newcomers, and it may happen fastest on such platforms as Pachube.

Likely. However, if one looks at the smart cities agenda, IBM got that a while ago; telcos, who invest billions in R&D, haven’t. They still have time to learn from the smarter kids on the block though ;)

- daniele (missing his technology electives at the business school)

sunbelt 2012

March 19th, 2012

Very interesting social network meet-up: full of sociologists and full of jokes about physicists – jokes mostly by older folks. These are my rough notes. I don’t have time to edit them… sorry ;)

- daniele (daniele tweeting)

(warning: i could not cover all presentations i attended).

Strategy and status in online dating (or: How to get a date online). Brilliant (super brilliant) talk by Kevin Lewis. He studied network data from a popular online dating site and looked at questions like “Who sends more or fewer messages to other dating site users? Who receives more or fewer messages? And who is more or less likely to respond to the messages they receive, and to receive responses to the messages they send?”. Very good work with a nice outcome – “gendered status hierarchies that characterize a given social structure”. A must-read work.

Happiness as the Duality of Ritual and Belief: Mapping Buddhist Social-Cultural Identities. This was a really good talk! José A Rodríguez and John W Mohr presented their analysis of the process of becoming a Buddhist in a Catholic country (Spain) “by mapping the shared systems of meanings and repertoires of practice in a western Buddhist lay sangha.” They analysed beliefs and practices by administering a survey with “detailed questions about their life styles, including fine-grained questions about religious beliefs (subsets of beliefs that reflect one’s Buddhist way of feeling and seeing) and the ensemble of practices that a person assembles.”

Concurrent Sex Partner Relations within Sexual Networks of Swingers. Fully packed room! Anne-Marie Niekamp “examined network indicators for the level of concurrency in the sexual ego networks of swingers predicting high potential of STI transmission”. Work with great potential!

Targeting Conflict with Social Network Balance. Ian McCulloh delivered a brilliant presentation in which he proposed “an approach based on balance theory and transitivity to identify non-intuitive centers of gravity to target and eliminate social conflict within an organization. The approach is demonstrated on three examples: a family in divorce, a team of intelligence analysts and information technologists operating in Iraq, and policy implications for the US war in Afghanistan.” He also compared the centrality of the same individuals in two networks: centrality(hate net)/centrality(like net) ;)

The “Telefunken Code”: A technique for the collection of anonymized social network data on sensitive topics. This was a brilliantly delivered presentation. Travis Wendel presented a way to “anonymously” collect social network data (which is very important if one is collecting data about HIV transmission, illicit drug distribution, and the functioning of criminal organisations). Using this technique, they interviewed methamphetamine users and distributors in New York City. More specifically: “we asked each participant how many members of the target population’s cell phone numbers he/she knew; if the number was five or fewer, we elicited information about all of them; if the number was greater, we elicited information about a random sample of five. We asked participants whether each of the last four digits of their (and network members’) cell phone numbers were odd or even, and from 0-4 or from 5-9 (generating the “Telefunken Code”).” In the future, they might ask for more digits, tackle the cognitive difficulty of understanding whether a digit is odd or even, and understand the impact of prepaid (disposable) mobile plans.
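To make the coding scheme concrete, here is a minimal sketch (my own toy illustration, not the authors’ instrument) of deriving such a code from the last four digits of a phone number: the two questions per digit (odd/even and 0-4/5-9) give two bits per digit, so the resulting code cannot be inverted back to the original number.

```python
def telefunken_code(phone_number: str, n_digits: int = 4) -> str:
    """Encode the last n_digits of a phone number as pairs of answers:
    for each digit, record whether it is odd or even and whether it falls
    in 0-4 or 5-9. Each pair still matches 2 or 3 possible digits, so the
    code cannot be inverted back to the number."""
    last = [int(c) for c in phone_number if c.isdigit()][-n_digits:]
    parts = []
    for d in last:
        parity = "odd" if d % 2 else "even"
        band = "0-4" if d <= 4 else "5-9"
        parts.append(f"{parity}/{band}")
    return "|".join(parts)

# Different numbers can share a code; that is the point: researchers can
# probabilistically match network members across interviews without ever
# storing actual phone numbers.
print(telefunken_code("212-555-0187"))  # even/0-4|odd/0-4|even/5-9|odd/5-9
print(telefunken_code("917-555-2369"))
```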

To Broker, or Not: The Psychology of Bridge Decay. Eric Gladstone presented very nice work on how brokers are perceived. They demonstrate that people perceive brokerage roles as more advantageous than other network positions. However, “participants in our studies saw these positions as burdensome, carrying the potential for significant cognitive overload and emotional stress. … what is necessary to maintain one’s position as a bridge is likely to be detrimental to one’s reputation, which also may explain why the bridge decays over time.” Here are the findings: 1) non-brokers were rated significantly more trustworthy than brokers; 2) non-brokers were chosen more often than brokers in a game that required trust (investment game); 3) brokers were chosen more often than non-brokers for a trivia game.

Risk, Uncertainty and Tie Strength by Paolo Parigi and Bogdan State. “Although risk and uncertainty have long been recognized as fundamental factors impacting the formation of social ties, their close study has been confined to the lab.” To fix that, they analysed data from CouchSurfing.com. They found “that ties formed in either high-risk or high-uncertainty conditions are stronger than those formed in no-risk or low-uncertainty circumstances. Uncertainty is not a significant predictor of tie strength when risk is inexistent, however. Our findings show, likewise, that highly-embedded ties are stronger, and that ties are stronger if they are embedded by stronger ties.”

Navigation in semantic memory. Nicole Beckage presented very neat work on semantic networks (using the Florida free association norms [Nelson et al. 1999]). She considered words as nodes, with edges directed and weighted based on frequency of response, and she looked at how individuals navigate from a starting word in the semantic network to a target word.
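My notes don’t record her actual navigation model, so purely to illustrate the setup, here is a minimal sketch of one naive baseline: a weighted random walk on a toy directed association network (the word list and weights below are made up).

```python
import random

# Toy directed semantic network: edge weights stand in for free-association
# response frequencies (all words and numbers here are made up for illustration).
graph = {
    "dog":      {"cat": 0.6, "bone": 0.3, "bark": 0.1},
    "cat":      {"dog": 0.4, "mouse": 0.4, "milk": 0.2},
    "mouse":    {"cheese": 0.7, "cat": 0.3},
    "bone":     {"dog": 0.8, "skeleton": 0.2},
    "bark":     {"tree": 0.5, "dog": 0.5},
    "milk":     {"cow": 0.6, "cat": 0.4},
    "cheese":   {"mouse": 0.5, "milk": 0.5},
    "tree":     {"bark": 1.0},
    "cow":      {"milk": 1.0},
    "skeleton": {"bone": 1.0},
}

def navigate(start: str, target: str, max_steps: int = 50, seed: int = 0):
    """Walk from `start` towards `target`, picking each next word with
    probability proportional to its association weight. Returns the path,
    or None if the target was not reached within max_steps."""
    rng = random.Random(seed)
    path, current = [start], start
    for _ in range(max_steps):
        if current == target:
            return path
        neighbours = graph.get(current, {})
        if not neighbours:
            return None
        words, weights = zip(*neighbours.items())
        current = rng.choices(words, weights=weights, k=1)[0]
        path.append(current)
    return path if current == target else None

print(navigate("dog", "cheese"))
```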

More talks (for hard-core readers) ;)


demographics of social media users

February 19th, 2012

@danielequercia

for those of us who do research in linking the online (social media) and offline worlds, it is very important to keep in mind the demographics of different social media services.

In Sept 09, according to Nielsen Claritas [], “the blogging and tweeting community at large isn’t necessarily more affluent, but bloggers and tweeters do live in more urban areas such as New York, Los Angeles, San Francisco, and Chicago.”

african americans were more likely to join twitter than other racial groups. hargittai & litt found out why that was the case, and they did so by analysing *longitudinal* data (i.e., user data in 2009 and service (twitter) adoption in 2010). the predictors of twitter adoption in 2010 were:
1) being african american;
2) having web skills (as of 2009);
3) interest (as of 2009) in entertainment/celebrity news (this topic entirely explained the higher level of adoption among african americans); science & research & technology & politics & news were negatively correlated (warning: these topics were negatively correlated for the age group under study – college students in chicago; for an older cohort, they might well matter).

These results are very US-centric, so here are a few pointers about UK folks in social media: USA (2010); UK/Britain; London.

Tangentially-related:

In the UK, we badly need research similar to hargittai & litt’s. Until now, I have only heard people complaining about the data not being representative, but nothing has been done to even partly tackle the problem.

=====================================

[] The More Affluent and More Urban are More Likely to use Social Networks

[] Eszter Hargittai and Eden Litt. The tweet smell of celebrity success: Explaining variation in Twitter adoption among a diverse group of young adults. 2011

* barriers to internet adoption in US.

Highlights from the Horizon Crowdsourcing for Transport Lab Talk, Nottingham, 25.01.2012

January 26th, 2012


Rob Houghton, Horizon, on The Role of Social Media in Transportation

  • Transport providers still use social media primarily for marketing purposes
  • Users signal each other about disruptions and cause back-channel effects for transportation providers
  • Extracting the “rail” lexicon from Twitter to detect backchanneling

Louise Crow, Lead Developer, www.FixMyTransport.com

  • Bother transport providers with transport issues reported by users
  • TfL has embraced the platform and uses it to improve its image and instill trust in passengers

  • They began crowdsourcing the relevant contact emails of responsible providers through a Google Spreadsheet

Matt Watkins, Tech Director – Mudlark – www.Chromaroma.com

  • http://vimeo.com/22023369
  • They get permission from users to “scrape” their data from TfL
  • They have permission to do so on some level but TfL is not very happy so they are looking to take the game overground
  • They have lots of TfL data that they are willing to share

Tracy Ross & Chris Parker, Loughborough Design School – “Ideas in Transit”

  • “Grassroot collaboration” – mass collaboration for fixing problems
  • They do feasibility studies in experienced utility and travel behaviour: experience sampling + crowdsourcing
  • Explored the effects of presenting map information vs verbal information to users traveling with public transport and did accessibility studies for disabled travelers

[readings] cities, leadership, adoption

December 10th, 2011

Building a Science of Cities by Mike Batty [pdf]

What Are Leaders Really For? by Duncan Watts on HBR [web]

An Experimental Study of Homophily in the Adoption of Health Behavior by Damon Centola on Science [web]

Happiness: measure of economic & social progress?

May 17th, 2011

This week, The Economist’s debate is…

Motion: “This house believes that new measures of economic and social progress are needed for the 21st century economy.”

Pro: Richard Layard
“Quality of life, as people experience it, has got to be a key measure of progress and a central objective for any government.” Read more.

Con: Paul Ormerod
“The real danger is the belief that by measuring happiness, it can then be predicted and controlled.” Read more.

 

Join debate

sentiment analysis

April 12th, 2011

if you are building sentiment analysis algorithms, test the results for the following examples:

  • It is not in doubt that Radiohead’s new album is excellent.
  • Not many albums are as good as the new Radiohead album.
  • Few people would claim that Radiohead’s new album is not excellent.
  • Radiohead’s new album is hardly a successful example of the genre.
  • Radiohead’s new album is anything but good.
  • Nobody considers Radiohead’s new album to be good.

To deal with wit, sarcasm, and complex emotions, one should resort to crowdsourcing.
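To see concretely why these sentences are tricky, here is a tiny sketch of a naive bag-of-words scorer (toy lexicon of my own, not any particular library): it calls all six sentences positive, which is right only by accident for the first three and plainly wrong for the last three, because it never looks at negation.

```python
# Toy lexicon-based scorer (lexicon and weights made up for illustration).
# It ignores negation, so it labels all six test sentences positive: right
# only by accident on the first three, plainly wrong on the last three.
LEXICON = {"excellent": 1.0, "good": 1.0, "successful": 1.0, "doubt": -0.5}

TEST_SENTENCES = [
    # actually positive (negation is doubled or scoped away)
    "It is not in doubt that Radiohead's new album is excellent.",
    "Not many albums are as good as the new Radiohead album.",
    "Few people would claim that Radiohead's new album is not excellent.",
    # actually negative
    "Radiohead's new album is hardly a successful example of the genre.",
    "Radiohead's new album is anything but good.",
    "Nobody considers Radiohead's new album to be good.",
]

def naive_score(sentence: str) -> float:
    """Sum word polarities, ignoring negation and its scope."""
    words = sentence.lower().replace(".", "").replace(",", "").split()
    return sum(LEXICON.get(w, 0.0) for w in words)

for s in TEST_SENTENCES:
    label = "positive" if naive_score(s) > 0 else "non-positive"
    print(f"{label:>12}: {s}")
```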

[Bad Science] Mistaking friends for foes

March 30th, 2011

The long tail doesn’t only hold for music, songs, and movies but also holds for papers published in Computer Science – few are good (mostly those in tier-one conferences), while most are rubbish. That begs the question of whether CS publishing as it is will perish. I’ll try to answer that question by taking as a running example a really bad paper recently published:

Mistaking friends for foes: An analysis of a social network-based Sybil defense in mobile networks

united colors of social computing

March 5th, 2011

this guy has pasted huge photos of people’s faces on street walls around the world, and he’s done so for a reason – to turn the world inside out! which social media technologies will turn the world inside out? one of the roles of social computing should be to build systems that unite people who are now divided by political, social, and religious differences

what we geeks don't get about social media privacy

March 4th, 2011

A few months ago I worked with Cambridge folks on the problem of mobile location privacy, and yesterday I presented the resulting paper (with the cool name of spotme). During my presentation, I realised that we computer scientists often ignore the difference between social media data being public and being publicised; we do so because technically there isn’t any difference, but socially there is a dramatic one.

To see what I mean, consider services that combine pieces of public information shared by individuals from different sources; they are often called data mashups. The problem with data mashups is that they have caused public outcries over privacy issues and are expected to create more problems in the future. To see why, consider that when people willingly share information about themselves, they do so in specific social contexts. When they make a piece of information publicly available, they implicitly guess who is more or less likely to come across that piece of information. When different pieces of public information are integrated together (when they are publicised), the social expectations people had when disclosing the single pieces may be completely disrupted (danah docet). Recent privacy failures are telling stories of disrupted social expectations.

A few years ago, Facebook aggregated content in ways that made it more visible to users who could already access it. When a Facebook user switched to an “it’s complicated” relationship, the user thought that only the few social contacts regularly visiting his profile would notice the change. Suddenly, that was not true anymore: a variety of contacts would learn of the switch just from their streams of updates. This change caused a big outcry, but Facebook did not have to back off – the users did. Facebook founder Mark Zuckerberg recently contributed to the discussion and claimed that the rise of social networking online means that people no longer have an expectation of privacy, adding “we decided that these would be the social norms now and we just went for it” (MZ sharing his wisdom). The result is that Facebook “users are now so hooked that they are unlikely to revolt against a gradual loosening of privacy safeguards”.

Another example comes from the data mashup performed by the site pleaserobme.com of Twitter (a microblogging service) and Foursquare (a service that lets people publicise their location so their social contacts can see where they are). This site publishes Foursquare location posts that appear on Twitter. The problem is that, when a user shares her location on Foursquare, she thinks that only her social contacts on Foursquare or Twitter will notice it. But that has now changed – pleaserobme exposes whether users are somewhere other than their home to the entire Internet community, including burglars. Again, when sharing location data, one has specific social expectations, but those expectations are disrupted by data mashups. The aim of pleaserobme was not malicious; it was simply to make users of location-based services reflect on whether they are over-sharing. Sharing decisions might be rational in the short term, but they underestimate what might happen to that information as it is remixed and reshared.

- daniele
web @danielequercia

icdm 2010

January 27th, 2011

last month i went to icdm (i gave a talk on rethinking mobile recommendations & neal on personalised public transport). a few interesting contributions follow:

// A. Christos Faloutsos (Carnegie Mellon University) gave a keynote talk titled ‘Mining Billion-node Graphs: Patterns, Generators and Tools’. He & his colleagues (=they, henceforth) studied several network measures such as:

  1. node degree. this measure might be useful to answer the question of, for example, whether an epidemic will die out or not (one may look at average degree, max degree, degree variance). However, it turns out that only the first eigenvalue of the adjacency matrix is needed to understand whether the epidemic will take over or not [see Prakash on arXiv]. (a small sketch of this eigenvalue check, together with the triangle trick in point 4, appears after this list)

  2. network diameter. they found a surprising result: the diameter shrinks over time. this is surprising because the theory (see science papers by barabasi et al.) predicts the opposite, i.e., that the diameter increases, and it does so according to log(n), where n is the number of nodes in the network. in a paper at sdm10, they proposed a way of computing the diameter efficiently and computed the diameter of the whole (Yahoo) Web: of course, its distribution was multi-modal, as one would expect – there are parts of the Web that are separated by, for example, language
  3. eigenvalues. eigenvalues might be useful for fraud detection (see KDD09 paper on belief propagation) and for characterising and spotting anomalies in networks (they built a tool for doing that; it’s called OddBall and characterises, e.g., ego networks by studying very simple quantities like number of nodes, number of edges, number of triangles, total weight, and principal eigenvalues)
  4. triangles. in this paper [Tsourakakis ICDM 2008], they have shown that if one has n friends, then he/she would have n^1.6 triangles on average. computing the number of triangles in a network is computationally expensive. fortunately, they proposed two ways to make this computation tractable:
  • the 1st way is to compute only a few top eigenvalues (lambda_i); the number of triangles is then approximately 1/6 of sum(lambda_i^3) [Tsourakakis ICDM 2008]
  • the 2nd way relies on SVD (see EigenSpokes at PAKDD 2010)
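here is a minimal numpy sketch of the two eigenvalue tricks above, on a made-up toy graph: the SIS epidemic-threshold check via the largest adjacency eigenvalue (as I understand the result), and the triangle count from eigenvalue cubes.

```python
import numpy as np

# Toy undirected graph (made-up adjacency matrix: symmetric, no self-loops).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

eigvals = np.linalg.eigvalsh(A)   # all eigenvalues, in ascending order
lambda_1 = eigvals[-1]            # largest eigenvalue

# 1) Epidemic threshold (SIS model): with infection rate beta and recovery
#    rate delta, the epidemic dies out if beta/delta < 1/lambda_1.
beta, delta = 0.1, 0.5            # made-up rates for illustration
dies_out = (beta / delta) < (1.0 / lambda_1)
print(f"lambda_1 = {lambda_1:.3f}, epidemic dies out: {dies_out}")

# 2) Triangle counting: the exact number of triangles is (1/6) * sum(lambda_i^3);
#    using only the top few eigenvalues gives a cheap approximation.
exact = np.sum(eigvals**3) / 6
top_2 = np.sum(eigvals[-2:]**3) / 6
print(f"triangles: exact = {exact:.1f}, top-2 approx = {top_2:.1f}")
```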

they also studied network-related quantities over time. for instance, they looked at:

  1. popularity of posts over time. they studied the number of links to a post over “lag” days. that is, given a post at time t, they looked at after which time t+lag a link to the post would start to appear. what’s the distribution of lag? it’s a power law with exponent -1.6 (a rough exponent-estimation sketch appears after this list)
  2. duration of phone calls. this quantity is often used to compute link weights in networks. in their research, they found that phone call duration fits a log-normal distribution in an OK-ish way but is best described by a newly introduced distribution called TLAC (LAzy Contractor): the longer a call has taken, the longer it will take
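and a rough sketch of how one could estimate a power-law exponent like the -1.6 in point 1 from a sample of observed lags, using the continuous-MLE approximation of Clauset, Shalizi & Newman (the lag data below are synthetic, not theirs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "lag" samples drawn from a Pareto-like distribution, standing in
# for the observed lags between a post and its incoming links.
alpha_true, lag_min = 1.6, 1.0
lags = lag_min * (1.0 - rng.random(10_000)) ** (-1.0 / (alpha_true - 1.0))

# Continuous-MLE estimate of the exponent (Clauset, Shalizi & Newman):
# alpha_hat = 1 + n / sum(log(x_i / x_min)).
alpha_hat = 1.0 + len(lags) / np.sum(np.log(lags / lag_min))
print(f"estimated exponent: -{alpha_hat:.2f} (true: -{alpha_true})")
```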

// B. [don't remember title] this paper tries to predict age and gender of web-page creators based on the page’s text, title, or structure.

// C. Modeling Information Diffusion in Social Media by Jaewon Yang and Jure Leskovec. the goal of this paper is to look at the process of info diffusion in networks and predict the number of infected nodes at a given point in time without knowing the network structure. to do so, they use a linear influence model that accounts only for which nodes got infected in the past. each node has an influence function that is estimated using past data. a node’s influence function is modelled in discrete time units, so no assumption is made about its shape. more on http://snap.stanford.edu/
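as I understood the model from the talk, the number of newly infected nodes at time t is written as the sum of per-node influence functions evaluated at the time since each earlier infection, with the functions fitted from past data. a minimal sketch of that idea, on toy data and using nonnegative least squares for the fit (my choice of fitting routine, not necessarily theirs):

```python
import numpy as np
from scipy.optimize import nnls

# Toy setup: 3 "influential" nodes with made-up infection times, and an
# observed volume series V(t) = number of newly infected nodes at time t.
# The linear influence model writes V(t) as the sum, over nodes u already
# infected, of an influence function I_u evaluated at the lag t - t_u.
L = 4                                   # influence functions last L time steps
infection_time = {"u0": 0, "u1": 2, "u2": 3}
T = 10
volume = np.array([0, 3, 2, 5, 6, 4, 2, 1, 0, 0], dtype=float)  # made up

# Design matrix: one column per (node, lag); row t has a 1 in column (u, lag)
# iff node u was infected exactly `lag` steps before t.
nodes = list(infection_time)
M = np.zeros((T, len(nodes) * L))
for j, u in enumerate(nodes):
    for lag in range(L):
        t = infection_time[u] + lag + 1   # influence starts one step after infection
        if t < T:
            M[t, j * L + lag] = 1.0

# Fit the (nonnegative) influence functions by least squares.
coef, _ = nnls(M, volume)
influence = {u: coef[j * L:(j + 1) * L] for j, u in enumerate(nodes)}
for u, I_u in influence.items():
    print(u, np.round(I_u, 2))

# Prediction for time t: sum the fitted influence functions at the right lags.
predicted = M @ coef
print("predicted volume:", np.round(predicted, 2))
```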

//D. MoodCast: Emotion Prediction via Dynamic Continuous Factor Graph Model. i would also briefly check this paper ;-)

Morozov talk at the LSE

January 12th, 2011

Professional cyber-curmudgeon

Evgeny Morozov will be speaking at the LSE on the 19th of January.

The war of words

January 5th, 2011

Evgeny Morozov’s latest skeptical article (of many) about the liberatory potential of communication technology describes a new trend in internet censorship: in addition to building national firewalls and knocking websites offline, some governments are trying to win the war of words on the internet by engaging their citizens in debate.

The Chinese government’s practice of paying citizens — the so-called “50 cent party” — to make pro-government points in online debates is well documented, and the propaganda department reported last year that “we have used the Internet to vigorously organize and launch positive propaganda, and actively strengthen our abilities to guide public opinion.” In Russia, Morozov claims that the Kremlin is engaging in “comment warfare” with its opponents. Yet what’s rarely asked in Western criticisms of such developments is how they compare to the operation of media power in our own societies. To the extent that authoritarian governments are engaging in public debate about their policies, rather than trying to silence such debate, are they not in fact moving towards a different form of governance — one in which winning the argument, rather than preventing the argument, is the foundation of legitimacy?

I’m not of course trying to claim that the internet in Russia or China is some kind of ideal Habermasian public sphere — but neither is the internet in Britain. “Public relations” and “spin” are accepted facts of life in Western politics, though few are honest enough to call them “propaganda”. We accept that a few hands control the news agenda, and that while anyone is in principle free to speak their mind, most will not be heard — yet we regard our society as essentially democratic, and Chinese or Russian society as essentially authoritarian.

I want to argue that such essentialism is mistaken, and that the mode of governance towards which China is rapidly moving — described by Rebecca MacKinnon as networked authoritarianism, and by Min Jiang as authoritarian deliberation — resembles in many ways the society in which we already live. That is not, of course, to say that they are identical; only that in both systems, winning the war of words is of paramount political importance.

This raises a difficult question for those who would like to quantify the democratising influence of communication technology: can we distinguish between propaganda and spin, between “comment warfare” and genuine debate — and if not, what does that mean for our understanding of democracy?

Magazines, marketers, middlemen and micropayments

January 4th, 2011

Fortune Tech is reporting that digital magazine subscriptions are already falling, before the new medium has even managed to reach the mainstream. Some in the publishing world would like to blame the problem on the difficulty of marketing digital magazines when Apple refuses to share customer data with publishers. Such complaints may be put to the test if Google’s prospective subscription service for Android gives publishers access to the customer data they’ve been asking for, as the Fortune article speculates.

How might the industry evolve if Apple, Google and the like have to compete to attract content creators by offering them customers’ data? This is an issue that goes beyond the new medium of digital magazines and calls into question the role of the device vendor as information gatekeeper.

One interesting possibility is that once creators can contact their customers they’ll deal with them directly, cutting device vendors out of future transactions and reducing their power as gatekeepers – and thus their bargaining power when trying to withhold further customer data. Given the potential for companies such as Apple and Google to abuse their position as Holders of the Holy Code-Signing Keys, I see this as a good thing; it might even be a step towards the kind of ‘peer-to-peer economics’ copyright reformers have long advocated, where artists and audiences benefit by cutting out rent-seeking middlemen. But for that to happen, there needs to be an easy way for creators to squeeze the occasional drop of cash from their newly-identified customers.

Perhaps Flattr, a click-based micropayment system, can fill that niche. But I wonder whether we can’t take its core insight – minimising mental transaction costs – even further. Why do I need to click when my device already knows what I’m reading, playing, watching and listening to?

What I want to suggest isn’t a new idea – it’s just a combination of two existing ideas, Flattr and scrobbling. Creators would tag their content to indicate ownership, and the mobile device would keep track of how much time was spent on each creator’s content during the month, dividing a fixed budget among the creators at the end of the month. Mental transaction cost: zero.
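A minimal sketch of that split-the-budget idea (all tags and numbers made up): the device logs attention time per creator tag over the month and divides the fixed budget pro rata at the end.

```python
from collections import defaultdict

MONTHLY_BUDGET = 5.00   # user-chosen monthly budget (made-up amount)

# Attention log the device would accumulate over the month:
# creator tag -> seconds spent on that creator's content (values made up).
attention_seconds = defaultdict(float, {
    "creator:alice-zine": 5400,
    "creator:bob-podcast": 12600,
    "creator:carol-blog": 900,
})

def settle(budget: float, log: dict) -> dict:
    """Split the budget among creators in proportion to attention time.
    If the user set the budget to zero (or nothing was logged), nobody gets paid."""
    total = sum(log.values())
    if budget <= 0 or total == 0:
        return {creator: 0.0 for creator in log}
    return {creator: round(budget * secs / total, 2) for creator, secs in log.items()}

print(settle(MONTHLY_BUDGET, attention_seconds))
# e.g. {'creator:alice-zine': 1.43, 'creator:bob-podcast': 3.33, 'creator:carol-blog': 0.24}
```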

The problem, of course, is that if users set their monthly budgets to zero, nobody gets paid. Will creators be willing to risk giving their work away for free in return for circumventing the middlemen and earning 100% of whatever people are willing to pay? If sales of digital magazines are anything to go by, they might not have much to lose…