PROFESSOR Brian Wansink was a shining star of research into the behavioural aspects of healthy and unhealthy food consumption.
Head of Cornell University’s Food and Brand Lab, he published hundreds of academic papers, was a US Presidential adviser on dietary guidelines, and made frequent media appearances to talk about his findings on everything from the size of popcorn buckets to all-you-can-eat buffets.
And then, late in 2018, it all came crashing down.
Wansink’s own descriptions of some of his methods – including the repeated slicing and dicing of old datasets to identify supposed patterns after the fact – led other researchers to start critiquing his published work.
Cornell instituted an inquiry and, in September last year, issued a statement saying Wansink had committed “academic misconduct in his research and scholarship, including misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship”.
Was Wansink a lone bad apple in the research barrel? Melbourne economist Jason Murphy thinks not.
In his book, Incentivology: The forces that explain tremendous success and spectacular failure, Murphy uses the Wansink case as an illustration of what he calls “the broken incentive structures of science”.
Incentives, Murphy writes, are a powerful and necessary tool in any field of endeavour, but are easily exploited if we are not vigilant.
A prime example of how incentives can go wrong comes from the French colonial project to construct sewers in Hanoi at the turn of the 20th century.
The colonialists began by building tunnels to channel waste away from the mansions of the Vietnamese city’s European quarter, creating in the process a palatial new habitat for the city’s rats, which began to reproduce in unprecedented numbers.
Fearful of disease, the French brought in an incentive scheme, encouraging locals to kill rats by paying for every tail brought in as proof.
It was a huge success: in a single month, the number of rats killed per day grew from 1000 to 15,000.
However, all was not as it seemed. The new market for rat tails had led locals to start breeding the animals.
When the French discovered the rat farms, they abandoned their incentive scheme, which presumably led to thousands of rats being released into the city. Not long after, bubonic plague came to Hanoi.
What does that kind of failure mean for contemporary scientific research?
Among the structures Murphy identifies as broken are the lack of incentives to publish negative findings, the imperative to keep findings secret until published, and the difficulty in getting discredited research retracted.
On top of that, there’s the journals’ laborious system of pre-publication peer review.
“Making gatekeepers anonymous, unpaid and powerful … and having them hand-picked from among your potential competitors? That is about the worst conceivable approach,” Murphy writes.
He suggests the introduction of a Wikipedia-style process of post-publication review and revision would bring benefits (though he stresses he’s not suggesting anybody with an internet connection should be able to edit published research).
“Anyone aghast at the idea that Wikipedia-esque processes might improve the current processes of science should remember that Wikipedia was expected to be a disaster but very much isn’t, while science is expected to be nearly perfect and very much isn’t,” he writes.
Perhaps the biggest issue Murphy identifies with publication of scientific research is speed … or rather the lack of it.
The process has scarcely changed since the appearance of what is often considered to be the first scientific journal, Philosophical Transactions of the Royal Society, in 1665, he argues.
The slow pace of publication can delay implementation of potentially life-saving discoveries and lead to wasteful duplication of effort, but perhaps an even bigger problem is the time it can take to get bad or inaccurate research retracted.
“A system that self-corrects slowly is a failed system,” Murphy writes.
In light of the 12 years it took The Lancet to retract Andrew Wakefield’s fraudulent paper claiming a link between vaccines and autism, he might just have a point.
Jane McCredie is a Sydney-based health and science writer.
The statements or opinions expressed in this article reflect the views of the authors and do not represent the official policy of the AMA, the MJA or InSight+ unless so stated.