University of Western Australia (Photo credit: Wikipedia)
Increasingly, it is not the quality of the research or the researcher that determines who gets funding in Australia’s universities, but the reputation of the institutions they work for.
This is now reflected in grant processes, with increasing time being spent arguing the “institutional fit” between researcher, project and university.
Prospective education minister Christopher Pyne this week called for reducing the amount of time researchers spend on grant applications by screening out those who aren’t likely to be successful.
“Of course, the grants process should be open to everyone … but we could save a lot of time, energy, and emotional investment by only requiring those who are genuinely in with a shot to undertake a full grant application,” he said.
There is broad support for this position in the sector - but the devil is in the detail. Processes to screen out sub-standard proposals are fine, so long as they focus on the research itself and not the reputation of the university.
As strange as it might sound, there are very good reasons why we need to fund “average” research.
How funding works
In Australia, federal government research funding is partly tied to institutional performance in the Excellence in Research for Australia exercise. The ERA evaluates the quality of universities' research through measures like research citations and expert panels in predefined fields of research (e.g. engineering, history and archaeology).
In each field, a university’s research is rated from one to five: a rating of one means the research is well below world standard, and a rating of five means it is well above. The higher a university scores in each field, the more money it receives via the Sustainable Research Excellence program.
Certainly, when it comes to publicly funded research, universities can hardly complain about proving that taxpayers are getting “the best bang for their buck”. However, specific grant schemes are now starting to include ERA ranks in their assessment processes.
For example, the Office of Learning and Teaching is currently calling for applications for research to enhance the training of maths and science teachers.
The instructions to applicants advise that only “institutions whose [ERA] rating in mathematics, physics, chemistry, biology, earth science, environmental science or education was 4 or above are eligible to lead projects”.
So it is not only the quality of the researchers that matters but also the reputation of the universities they work for.
ERA obsession
There are real implications if this trend continues. For example, in the last ARC Discovery round, worth A$250 million, a quarter (A$63.3 million) of all funding was given to researchers working in “average”, “below average” or “well below average” departments.
So does this mean that one research dollar out of four was wasted? Of course not, but an increasing obsession with ERA ranks is leading politicians and policymakers to infer this.
If the Discovery grants had been given only to universities that received an ERA score of four or five in the relevant field of research, 21 universities would have lost more funding than they gained in the reallocation.
In fact, ten universities would have received no funding at all, because every grant they won went to researchers working in supposedly “poor” departments.
The big winners would have been the Group of Eight universities, which would have taken A$36 million from the other institutions in this one scheme alone, in a single year.
If a future government used ERA rankings as the main criterion to direct research funds, the national research landscape would ossify.
In the time between assessment exercises (probably five years, but perhaps as many as seven), the most highly-ranked universities would have preferential access to major government research schemes. By the time the next round was held, they would therefore be in the best position to demonstrate research at the level required to access the next tranche of research funds.
Narrow fields
Allowing a sole criterion to be a significant determinant of research funding has the potential to do more harm than good. It would be just as likely to breed complacency within these universities as to spur improvements in the quality of their research.
Whilst these institutions would help Australia look good in international ranking tables, such a policy would be detrimental to the national benefit gained from a more diverse and innovative research sector.
At the individual level, more and more researchers would be forced (rather than encouraged) to switch universities in order to conduct their research.
Universities must be accountable - but not to a single number. An obsession with ERA rankings says little about a government’s desire to foster a culture of innovative research in Australia but a lot about the political desire to back a winner.
Change ahead
Given that this year we will almost certainly see a change of government, the position of the Coalition is highly relevant.
Shadow higher education minister Brett Mason is on record as wondering whether quality in education will be achieved by having “one of our universities in the world’s top ten [or instead where] excellence is spread more evenly across all participants?”
However, as is frequently the case with politicians of all affiliations, he prefers not to provide an explicit answer.
Advancing an innovative research culture requires politicians to accept that they cannot predetermine winners with any degree of certainty - they need to commit funds, in diverse fields of research, on the basis of potential, not just the previous record of a particular institution.
By all means rank universities and let those ranks be a source of institutional pride (as well as unprofessional point-scoring and bitter debate). Just don’t let the ranks drive the more serious business of fostering excellent research.
Tim Pitman does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
This article was originally published at The Conversation. Read the original article.