by Dorothy Bishop, BishopBlog: http://deevybee.blogspot.co.uk/2014/01/off-with-old-and-on-with-new-pressures.html
Yesterday I escaped a very soggy Oxford to make it down to London for a symposium on "Increasing value, reducing waste" in research.
The meeting marked the publication of a special issue of the Lancet containing five papers and two commentaries, all of which can be downloaded from the Lancet website.
I was excited by the symposium because, although the focus was on medicine, it raised a number of issues that have much broader relevance for science, several of which I have raised on this blog: pre-registration of research, the criteria used by high-impact journals, ethics regulation, academic backlogs, and incentives for researchers.
It was impressive to see that major players in the field of medicine are now recognizing that there is a massive problem of waste in research. Better still, they are taking seriously the need to devise ways in which this could be fixed.
I hope to blog about more of the issues that came up in the meeting, but for today I'll confine myself to one topic that I hadn't really thought about much before, but which I see as important: the need for research to build on previous research, and the current pressures against this.
Iain Chalmers presented one of the most disturbing slides of the day, a forest plot of effect sizes found in medical trials for a treatment to prevent bleeding during surgery.
Time runs along the x-axis, and the horizontal line corresponds to a result where the active and control treatments do not differ. Points that sit below the line, and whose fins (confidence intervals) do not cross it, show a beneficial effect of treatment.
The graph shows that the effectiveness of the treatment was clearly established by around 2002, yet a further 20 studies involving several hundred patients were reported in the literature after that date.
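To make the logic of that plot concrete, here is a minimal sketch in Python of a cumulative plot of this kind, using matplotlib. The numbers are entirely hypothetical (the real slide was built from published trial data); the point is simply how the tightening confidence intervals settle below the no-difference line well before the later trials were run.

```python
# A minimal sketch of a cumulative meta-analysis plot, with entirely
# hypothetical numbers. Each point is the pooled effect estimate after
# adding one more trial; the dashed line at 0 marks "no difference"
# between active and control treatments.
import matplotlib.pyplot as plt

# Hypothetical cumulative effect estimates (log odds ratios) and the
# half-widths of their 95% confidence intervals, one per publication year.
years = [1990, 1992, 1994, 1996, 1998, 2000, 2002, 2004, 2006]
effects = [-0.10, -0.30, -0.40, -0.45, -0.50, -0.50, -0.52, -0.51, -0.50]
ci_half = [0.60, 0.45, 0.35, 0.28, 0.22, 0.18, 0.14, 0.12, 0.10]

fig, ax = plt.subplots()
ax.errorbar(years, effects, yerr=ci_half, fmt="o", capsize=3)
ax.axhline(0, linestyle="--")  # active and control do not differ
ax.set_xlabel("Year of publication")
ax.set_ylabel("Cumulative effect (log odds ratio)")
ax.set_title("Cumulative meta-analysis (hypothetical data)")
plt.show()
```

In a plot like this, once the whole error bar sits below the dashed line, the treatment's benefit is established; further trials after that point add patients without changing the answer.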
Chalmers made the point that it is simply unethical to do a clinical trial if previous research has already established an effect. The problem is that researchers often don't check the literature to see what has already been done, and so there is wasteful repetition of studies.
In the field of medicine this is particularly serious because patients may be denied the most effective treatment if they enrol in a research project.
Outside medicine, I'm not sure this is so much of an issue. In fact, as I've argued elsewhere, in psychology and neuroscience I think there's more of a problem with lack of replication. But there definitely is much neglect of prior research.
I lose count of the number of papers I review where the introduction presents a biased view of the literature, slanted to support the authors' conclusions.
For instance, if you are interested in the relation between auditory deficit and children's language disorders, it is possible to write an introduction presenting this association as an established fact, or to write one arguing that it has been comprehensively debunked. I have seen both.
Is this just lazy, biased or ignorant authors? In part, I suspect it is. But I think there is a deeper problem which has to do with the insatiable demand for novelty shown by many journals, especially the high-impact ones.
These journals are typically under heavy pressure on page space and often allow 500 words or fewer for an introduction. Unless authors can refer to a systematic review of the topic they are working on, they are obliged to give only the briefest account of prior literature.
It seems we no longer value the idea that research should build on what has gone before: rather, everyone wants studies that are so exciting that they stand alone. Indeed, if a study is described as 'incremental' research, that is typically the death knell in a funding committee.
We need good syntheses of past research, yet these are not valued because they are not deemed novel. One point made by Iain Chalmers was that funders have in the past been reluctant to give grants for systematic reviews.
Reviews also aren't rated highly in academia. For instance, I'm proud of a review of mismatch negativity that I published in Psychological Bulletin in 2007: it not only condensed and critiqued existing research, but also identified patterns in the data that had not previously been noted. Yet for the REF, and for my publications list on a grant renewal, reviews don't count.
We need a rethink of our attitude to reviews. Medicine has led the way and specified rigorous criteria for systematic reviews, so that authors can't just cherry-pick specific studies of interest. But it has also shown us that such reviews are an invaluable part of the research process.
They help ensure that we do not waste resources by addressing questions that have already been answered, and they encourage us to think of research as a cumulative, developing process, rather than a series of disconnected, dramatic events.
Reference
Chalmers, I., Bracken, M. B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A. M., Howells, D. W., Ioannidis, J. P. A., & Oliver, S. (2014). How to increase value and reduce waste when research priorities are set. The Lancet. doi:10.1016/S0140-6736(13)62229-1