A new report by senior US intelligence officers recommends sweeping changes to intelligence-gathering practices in Afghanistan. The two most interesting recommendations:
- Intelligence work should be divided along geographic, rather than functional, lines. “The alternative – having all analysts study an entire province or region through the lens of a narrow, functional line (e.g. one analyst covers governance, another studies narcotics trafficking, a third looks at insurgent networks, etc) – isn’t working.” (p4)
- Analysts should aggregate intelligence by regularly travelling to visit those who collect it. “Information essential to the successful conduct of a counterinsurgency is ripe for retrieval, but analysts that remain confined to restricted-access buildings in Kabul or on Bagram and Kandahar Airfields cannot access it.” (p17) The internet is not suitable for this purpose because “vital information piles up in obscure SharePoint sites, inaccessible hard drives, and other digital junkyards.”
The first point interests me because it suggests that problem-solving doesn’t always scale through specialisation, as is often assumed in academia: when the flow of information is constricted, a geographically organised hierarchy of generalists may be more effective than a taxonomically organised hierarchy of specialists.
The second point bears more directly on mobblog’s research interests (though I’m not suggesting we should design communication systems for the US military): manual aggregation and curation of information are still necessary, even when that information is in digital form. More surprisingly, the oldest method of aggregation – the sneakernet – remains the most reliable.
The issues discussed in the report might seem specific to the chaotic and poorly connected environment of Afghanistan, but I want to argue that the fundamental problem – finding relevant information in a shifting sea of circumstances, practices, organisational structures and data formats – exists everywhere, and is not solved by better connectivity, nor by making everything digital.
David Weinberger has suggested that in the digital realm, tags will replace taxonomies, making it unnecessary to separate the organisation of information from its retrieval. But while the notion of a ‘hierarchy of generalists’ does cast doubt on the usefulness of a priori taxonomies, the recommendation of manual data collection and curation is directly opposed to Weinberger’s ‘tag soup’ approach.
Does this simply reflect a lack of tools (or, God help us, standards), or is the complexity of real-world information as irreducible to tags as it is to taxonomies? Anyone who’s used Google Images will recognise the difficulty of applying tags to non-textual data; assuming the sea never stops shifting, will the extraction of relevant knowledge from information always be a matter of – well – intelligence?