Applying reputation to science

I was browsing old posts on MobBlog the other day and ran into Daniele’s post on (double) blind review. I have to say I’ve yet to see a conference in my field that genuinely benefited from double blind review, and despite the anecdotes I’ve heard of double blind producing statistically better outcomes for female authors than regular review, I’m starting to think we may need to move in the exact opposite direction soon.

Why’s that? First off, in the contexts I’ve waded into, leaving author information off the front page is a joke: people write about named systems that identify the research group to the cliques easily enough, even if the self-citations weren’t a giveaway by themselves. (If you can write a truly standalone conference paper on inter-enterprise collaboration, your sub-problem is too small. ;))

Second, it seems to me that much of the time the cliques actually form around narrow sets of methodologies or sub-problems that are familiar to the reviewers and therefore promotion-worthy, while everything else takes actual effort to understand and is therefore suspect at best; and yet, to become established, a new school of thought just needs to set up a forum or few of its own. Even though the local gurus can throw anything on paper and get it accepted, your main problem as a submitter is that your paper won’t get through because it speaks of the wrong things: you’re the guy bringing an axe to the hammersmith convention.

But most importantly, as they grow, anonymous societies degrade asymptotically towards a point of non-cooperation that we’ll call “4chan”, a completely theoretical construct. Check out the fun graph on page 790 (6) of “The nature of human altruism”. Assume for a moment that the primary purpose of peer review is to weed out actually bad science, not to reject X% of papers for the sake of rejection rates (and we really do go as far as explicitly setting minimums for those rates these days; content is clearly overrated). If rejection is the punishment, then the second-order punishment becomes punishing bad or unfair reviewers. Without any review, we can sustain a community the size of a minor research group. With anonymous reviewers operating behind the cover of electronic submission sites, without actual reviewer meetings to enforce culture transfer, we can keep up to a few dozen researchers in check before lazy defectors ruin it for everyone. But with the bloated mass of researchers wildly hammering away these days, we have only two options: splitting into cliques small enough that anonymity is not a problem, or punishing unfair reviewers with bad rep in the hope of scaling up the community size.
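
To make that last option concrete, here’s a minimal Python sketch of what “punishing unfair reviewers with bad rep” could look like. The Reviewer class, the fairness-vote scheme, and every constant below are hypothetical illustrations of the mechanism, not a spec from any of the papers mentioned here.

    from dataclasses import dataclass

    @dataclass
    class Reviewer:
        """Hypothetical reviewer carrying a public reputation score."""
        name: str
        reputation: float = 1.0  # neutral starting point; all constants are made up

    def rate_review(reviewer: Reviewer, fairness_votes: list[int],
                    reward: float = 0.1, penalty: float = 0.3) -> None:
        """Second-order punishment: the community rates the review itself.

        fairness_votes are +1 (fair) / -1 (unfair) votes, e.g. from authors
        and fellow PC members. Unfair reviews cost more reputation than fair
        ones earn, mirroring the asymmetry of altruistic punishment.
        """
        if not fairness_votes:
            return
        score = sum(fairness_votes) / len(fairness_votes)  # in [-1, 1]
        reviewer.reputation += (reward if score >= 0 else penalty) * score

    def eligible(reviewers: list[Reviewer], threshold: float = 0.5) -> list[Reviewer]:
        """Lazy defectors sink below the threshold and stop getting assignments."""
        return [r for r in reviewers if r.reputation >= threshold]

    # A lazy defector writing consistently unfair reviews gets weeded out:
    pc = [Reviewer("diligent"), Reviewer("defector")]
    for _ in range(3):
        rate_review(pc[0], [+1, +1, -1])
        rate_review(pc[1], [-1, -1, -1])
    print([r.name for r in eligible(pc)])  # ['diligent']

The point of the threshold is the same one the altruistic-punishment literature makes: once defection carries a visible cost, cooperation stops collapsing as the group grows.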

Wild delusions aside, peer review and impact measurements suffer from being methods for career advancement rather than for the advancement of science. When was the last time you wrote a paper that participated in an actual scientific discussion rather than being yet another monologue written for parrot points? Cast aside your established field, for it can only offer you infinite minor tweaking! Be wild, be free, write good reviews of emerging fields! You’ll never get tenure, but you’ll have a lot more fun while it lasts!

One Response to “Applying reputation to science”

  1. And a fun paper brought to my attention by Daniele: “Scaling the academic publication process to Internet scale”, CACM, January 2009: http://doi.acm.org/10.1145/1435417.1435430