John Curtin School of Medical Research, ANU (image: Wikipedia)
The new funding intended to encourage world-class excellence in research is not enough to justify the work involved in measuring that research, says Australian Nobel prizewinner Professor Brian Schmidt.
In a perspective piece in today’s Nature, Schmidt says that when the Excellence in Research for Australia (ERA) initiative was launched in 2008, there was an understanding that it would influence funding decisions.
Under ERA, all research done by Australian universities is assessed against national and international standards. It is then graded on a five-point scale, with a score of three meaning the research is at world standard.
Schmidt says the scale was then intended to be used to allocate extra funding as an incentive to reward the highest-quality work.
“On a scale of one to five the two lowest ratings attract nothing, whereas the top rating (five or ‘well-above world-standard’) earns seven times that of the middle rating (three, or simply ‘world-standard’),” he says.
But after two rounds of ERA grading, in 2010 and 2012, he says the amount of money available for such incentives has “all but vanished”. He says the A$116 million available through ERA is “trivial” compared with the overall budget for higher education research and development.
“Given that the government has spent A$43.5 million on the ERA, and universities themselves have outlaid substantial sums to undertake the ERA evaluations since 2008, one might question the value of this exercise that awards so little money,” he says.
The benefits of ERA
But Schmidt, who is Professor of Astronomy at the Australian National University, says ERA has done much to influence how Australia’s A$9 billion in higher education R&D spending is allocated.
“By focusing its assessment on research quality, rather than quantity, the ERA has helped elevate the research at many of Australia’s universities,” he says.
It also allows universities to measure themselves against others, which will help them invest more strategically in future research.
But Schmidt questions whether the current ERA ranking scheme will continue to influence the sector given so little money is at stake.
He suggests it would be better to use such an assessment scheme to cover the full cost of research, as is done in the UK, US and Canada, with ERA topping up research funds.
“Unfortunately, Australian grants provide nowhere near the full cost of research; significant cross-subsidisation is required from student fees,” Schmidt says in the Nature article.
“This undesirable method of research funding is unfair to students who believe they are paying for their education but are in fact paying for the country’s research.”
Broad agreement
The professor’s comments, published as part of a Nature special looking at research in Australia and New Zealand, were welcomed by others who have been critical of the current ERA and Australia’s research funding system.
Paul Jensen, Professor of Innovation, Science and Technology Policy at Melbourne University, this week raised the question of whether Australia is spending its A$9 billion on research wisely.
He too is concerned about the costs outlaid by both government and universities on data collection, although he says the full potential value of the ERA rankings has yet to be unlocked.
“Given that the ERA system exists for the foreseeable future, tying more money to the ERA outcomes would be beneficial in promoting the goal of research excellence in Australian universities,” Jensen said.
Schmidt has also questioned the frequency of the ERA rankings, with the next round due next year. He says very little is likely to have changed within universities, and questions what would be gained from putting them through “this formidable exercise again”.
Jensen said the ERA was only one of many mechanisms that were likely to shape the future contours of the Australian higher education system.
“In isolation, it is hard to see how the ERA could result in changes which might necessitate conducting it every three years,” he said.
Dr Tseen Khoo, from La Trobe University’s Research Education and Development Unit, said Schmidt was right to question whether ERA was worthwhile for universities.
“Particularly when the costs involved for universities are much higher than purely administering the data collection and submission,” she said.
Khoo said some universities tried to improve their chances in the ERA rankings by poaching researchers from other institutions before each ERA census date and by investing heavily in ERA-worthy areas of research.
“While I’m in strong agreement with Brian about the need for consistent and higher investment in the higher education R&D sector, whether ERA metrics do indeed capture or encourage research excellence is open to question,” she said.
Professor Matthew Bailes, the Pro-Vice Chancellor for Research at Swinburne University of Technology, said any move from measuring “quantity” to “quality” should be applauded.
But Schmidt has also questioned how ERA rankings may be used following this year’s budget move toward deregulating university fees, a point that also concerns Bailes.
“The recently proposed changes to the higher education sector have the danger of making students the source of research funding, not the taxpayer at large,” Bailes said.
“It would be preferable for governments to dictate the level of research spending, not vice chancellors.”
This article was originally published on The Conversation. Read the original article.