The University of Bologna, Italy, the world’s oldest continuously operating university (Photo credit: Wikipedia)
Susan Wright, Bruce Curtis, Lisa Lucas and Susan Robertson provide a basic outline of their working paper on how performance-based research assessment frameworks in different countries operate and govern academic life.
They find that assessment methods steer academic effort away from wider purposes of the university, enhance the powers of leaders, propagate unsubstantiated myths of meritocracy, and demand conformity.
But the latest quest for ‘impact’ may, in effect, unmask these operations and diversify ‘what counts’ across contexts.
Our working paper Research Assessment Systems and their Impacts on Academic Work in New Zealand, the UK and Denmark arises from the EU Marie Curie project ‘Universities in the Knowledge Economy’ (URGE) and specifically from its 5th work package, which examined how reform agendas that aimed to steer university research towards the ‘needs of a knowledge economy’ affected academic research and the activities and conduct of researchers.
The working paper focuses on Performance-Based Research Assessment systems (PBRAs). In the UK, New Zealand and Denmark, PBRAs now act as a quality check, a means of allocating funding competitively between and within universities, and a mechanism for governments to steer universities to meet what politicians consider to be the needs of the economy.
Drawing on the studies reported here and on the discussions that followed their presentation at the URGE symposium, we highlight four main points.
Narrowing of the Purpose of the University
PBRAs gained renewed purpose when governments accepted the arguments of the OECD and other international organisations that, in a fast approaching and inevitable future, countries had to compete over the production and utilisation of knowledge and in the market for students (Wright 2012).
Governments saw universities as the source of these new raw materials, and PBRAs became important mechanisms to steer universities in particular directions.
However, they are quite a blunt instrument: PBRAs’ assessment methods prioritise ‘academic’ publications, which have notoriously few readers but which are heavily weighted in global rankings of universities.
This focus is therefore appropriate where governments aim for their universities to claim ‘world class’ status in order to attract a share of the global trade in students. However, such an instrument steers academic effort away from other purposes of the university, which might also be part of governments’ aims, for example transferring ideas to industry or contributing more widely to social debates and democracy.
In all cases, PBRAs capture only certain aspects of the university, with the danger of narrowing and impoverishing its mission.
Image credit: Ivy Dawned (Flickr, CC BY-NC-SA)
Glorification of Leaders
Just as measures become targets, so such steering tools become the main rationale of management and are used to reshape the university. One of the points raised in discussion at the URGE symposium was how governments’ steering of universities through such measures relies on enhancing the powers of leaders.
Lucas (2006; 2009) has shown how the history of the UK’s Research Assessment Exercise (RAE) is paralleled by the emergence of a managerial class to control the university’s performance.
Robertson’s case study records how yet another new administrative apparatus was developed to advise academics on, and quality-control, the devising and writing of ‘impact’ case studies for the Research Excellence Framework (REF, which replaced the previous RAE).
These systems of steering universities have contributed not only to what in the U.S. is called universities’ ‘administrative bloat’ (Ginsberg 2011) but also to what was referred to at the URGE symposium as the ‘glorification’ of vice chancellors.
When the Key Performance Indicators of university managers in New Zealand and the UK are based on improving their university’s position in national and global rankings, those rankings become organisational imperatives.
A new language has emerged that speaks of the violence involved in the RAE, for example ‘cutting off the tail’ of departments: getting rid of academics who, regardless of their other qualities and contributions, score low on RAE-able publications.
In New Zealand, by contrast, the PBRA rationale has not taken over the life of the university so compulsively, and other narratives about the purpose of the university are still available.
Myths of the Level Playing Field
PBRAs are accompanied by rhetoric that their standardised metrics obviate favouritism and establish meritocratic advancement. It was argued at the URGE symposium that there used to be baronial departments in which only the head of department’s (usually male) cronies succeeded.
Now, the argument goes, there are clear criteria for promotion and funding, and all can strategise, individually, to succeed. Such transparent criteria should lead to both excellence and equity.
Yet the new metric for promotion fetishises external funding, and Curtis’ analysis reveals that New Zealand’s Performance-Based Research Fund (PBRF) systematically disadvantages women, those trained in New Zealand, and those studying New Zealand issues. In the UK, the RAE also systematically disadvantages women.
Robertson’s analysis of the shift from RAE to REF in the UK clearly shows how different systems systematically advantage and disadvantage different subjects.
Subjects like nursing, public policy and some humanities, which had done badly under the RAE’s focus on academic publications, proved well placed to demonstrate ‘impact’ under the REF.
For these subjects, the income from REF ‘impact’ would make a considerable difference whereas, for some other subjects, such as engineering, the cost in academic time to put together REF cases demonstrating their undoubted ‘impact’ would not yield sufficient returns, compared to their other sources of income.
Dangerous Coherence
PBRAs act as tools of governance when their definition of ‘what counts’ pervades government steering, university management and academic identity formation (Wright forthcoming).
Unambiguous definitions of what counts send clear messages to university staff and managers, who then act accordingly, perhaps not in line with other government indicators.
The recent inclusion of ‘impact’ in the REF reflected governmental concern that the previous RAE’s primary focus on each academic producing four articles in top journals had eroded the capacity for staff to provide policy advice. The inclusion of impact broadens and complicates ‘what counts’.
In this respect, Curtis (2007; under review) has noted how the PBRF in New Zealand sends mixed messages to university managers: New Zealand universities also have a legal obligation to be the ‘critic and conscience of society’. Similarly, Danish universities have a legal obligation to engage with and disseminate their research to ‘surrounding society’. Both obligations would have the potential to diversify ‘what counts’ if performance and funding measures were devised in keeping with them.
Hopefully, the UK’s quest for ‘impact’ will have a wider impact of its own: unmasking the operations of PBRAs as political technologies, and their role in a pervasive form of governance that is narrowing and impoverishing the public purpose of the university.
This piece is based on the findings in the 2014 working paper Research Assessment Systems and their Impacts on Academic Work in New Zealand, the UK and Denmark, Working Papers in University Reform no. 24. Copenhagen: DPU, Aarhus University, April.
Note: This article gives the views of the authors, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
About the Authors
Susan Wright is Professor of Educational Anthropology at the Department of Education (DPU), Aarhus University. She studies people’s participation in large-scale processes of transformation: most recently, academics’, managers’ and policy makers’ engagement with Danish university reforms. Universities were one of several sites through which she studied changing forms of governance in the UK from the 1980s onwards. Informing all her work are insights gained from studies of political transformation in Iran before and after the Islamic Revolution.
Bruce Curtis lectures in Sociology at the University of Auckland. His research interests include organisations, neo-colonialism, academic life and methodologies.
Lisa Lucas is a sociologist of higher education based in the Centre for Globalisation, Education and Societies in the Graduate School of Education, University of Bristol. Her research involves looking at global comparisons of higher education policy, particularly research policy and global league tables. She is also interested in the nature of academic work and identity as well as issues around social justice and widening participation to higher education.
Susan L. Robertson is Professor of Sociology of Education, Graduate School of Education, University of Bristol, UK.