Monday, January 13, 2014

Why Does So Much Research Go Unpublished?

by Dorothy Bishop, BishopBlog: http://deevybee.blogspot.com.au/2014/01/why-does-so-much-research-go-unpublished.html
As described in my last blogpost, I attended an excellent symposium on waste in research this week. A recurring theme was research that never got published.

Rosalind Smyth described her experience of sitting on the funding panel of a medium-sized charity. The panel went to great pains to select the most promising projects, and would end a meeting with a sense of excitement about the great work that they were able to fund.

A few years down the line, though, they'd find that many of the funds had been squandered. The work had either not been done, or had been completed but not published.

In order to tackle this problem, we need to understand the underlying causes. Sometimes, as Robert Burns noted, the best-laid schemes go wrong. Until you've tried to run a few research projects, it's hard to imagine the myriad different ways in which life can conspire to mess up your plans.

The eight laws of psychological research formulated by Hodgson and Rollnick are as true today as they were 25 years ago.

But much research remains unpublished despite being completed. Reasons are multiple, and the strategies needed to overcome them are varied, but here is my list of the top three problems and potential solutions.

Inconclusive results

Probably the most common reason for inconclusive results is lack of statistical power.

A study is undertaken in the fond hope that a difference will be found between condition X and condition Y, and if the difference is found, there is great rejoicing and a rush to publish. A negative result should also be of interest, provided the study was well-designed and adequately motivated.

But if the sample is small, we can't be sure whether our failure to observe the effect means it is truly absent: a real but small effect could simply be swamped by noise.
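To make this concrete, here is a minimal simulation sketch (not from the original post; the true effect of 0.3 standard deviations and the two sample sizes are hypothetical choices for illustration) showing how often a study of a given size detects a real but modest effect:

```python
# Illustrative sketch: an underpowered study can easily miss a real effect.
# Assumes a hypothetical true difference of d = 0.3 SD between conditions X and Y.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.3        # real but modest difference between conditions
n_simulations = 5000

for n_per_group in (20, 175):
    significant = 0
    for _ in range(n_simulations):
        x = rng.normal(0.0, 1.0, n_per_group)          # condition X
        y = rng.normal(true_effect, 1.0, n_per_group)  # condition Y
        _, p = stats.ttest_ind(x, y)
        significant += p < 0.05
    power = significant / n_simulations
    print(f"n = {n_per_group:3d} per group: empirical power ~ {power:.2f}")

# With n = 20 per group, most simulated studies come out non-significant
# even though the effect is genuinely there; it takes roughly 175 per group
# before power reaches about 0.80 for an effect of this size.
```

In other words, a "negative" result from a small study tells us very little about whether the effect exists.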

I think the solution to this problem lies in the hands of funding panels and researchers: quite simply, they need to take statistical power very seriously indeed and to consider carefully whether anything will be learned from a study if the anticipated effects are not obtained.

If not, then the research needs to be rethought. In the fields of genetics and clinical trials, it is now recognised that multicentre collaborations are the way forward to ensure that studies are conducted with sufficient power to obtain a conclusive result.

Rejection of completed work by journals

Even well-conducted and adequately powered studies may be rejected by journals if the results are not deemed to be exciting. To solve this problem, we must look to journals.

We need recognition that - provided a study is methodologically strong and well-motivated - negative results can be as informative as positive ones. Otherwise we are doomed to waste time and money pursuing false leads.

As Paul Glasziou has emphasised, failure is part of the research process. It is important to tell people about what doesn't work if we are not to repeat our mistakes.

We do now have some journals that will publish negative results, and there is a growing move toward pre-registration of studies, with guaranteed publication if the methods meet quality criteria. But there is still a lot to be done, and we need a radical change of mindset about what kinds of research results are valuable.

Lack of time

Here, I lay the blame squarely on the incentive structures that operate in universities. To get a job, or to get promoted, you need to demonstrate that you can pull in research income.

In many UK institutions this is quite explicit, and promotion criteria may specify a target of X thousand pounds of research income per annum. There are few UK universities whose strategic plan does not include a statement about increasing research funding.

This has changed the culture dramatically; as Fergus Millar put it: "in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding".

Of course, for research to thrive, our Universities need people who can compete for funding to support their work. But the acquisition of funding has become an end in itself, rather than a means to an end.

This has the pernicious effect of driving people to apply for grant after grant, without adequately budgeting for the time it takes to analyse and write up research, or indeed to carefully think about what they are doing.

As I argued previously, even junior researchers these days have an 'academic backlog' of unwritten papers.

At the Lancet meeting there were some useful suggestions for how we might change incentive structures to avoid such waste.

Malcolm Macleod argued that researchers should be evaluated not by research income and high-impact publications, but by the quality of their methods, the extent to which their research was fully reported, and the reproducibility of their findings.

An-Wen Chan echoed this, arguing for performance metrics that recognise full dissemination of research and use of research datasets by other groups. However, we may ask whether such proposals have any chance of being adopted when University funding is directly linked to grant income, and Universities increasingly view themselves as businesses.

I suspect we would need revised incentives to be reflected at the level of those allocating central funding before vice-chancellors took them seriously. It would, however, be feasible for behaviour to be shaped at the supply end, if funders adopted new guidelines.

For a start, they could look more carefully at the time commitments of those to whom grants are given: in my experience this is never taken into consideration, and one can see successful 'fat cats' accumulating grant after grant, as success builds on success.

Funders could also monitor more closely the outcomes of grants: Chan noted that NIHR withholds 10% of research funds until a paper based on the research has been submitted for publication.

Moves like this could help us change the climate so that an award of a grant would confer responsibility on the recipient to carry through the work to completion, rather than acting solely to embellish the researcher's curriculum vitae.

References

Chan, A.-W., Song, F., Vickers, A., Jefferson, T., Dickersin, K., Gøtzsche, P. C., Krumholz, H. M., Ghersi, D., & van der Worp, H. B. (2014). Increasing value and reducing waste: addressing inaccessible research. Lancet (8 Jan). doi:10.1016/S0140-6736(13)62296-5

Macleod, M. R., Michie, S., Roberts, I., Dirnagl, U., Chalmers, I., Ioannidis, J. P. A., . . . Glasziou, P. (2014). Biomedical research: increasing value, reducing waste. Lancet, 383(9912), 101-104.
