There has been much discussion over how useful citation metrics, like Google Scholar’s H-index, really are and to what extent they can be gamed.
Specifically, there appears to be concern over the practice of self-citation, as its prevalence varies widely between disciplines.
So what should academics make of self-citations?
Referring back to our Handbook on Maximising the Impact of Your Research, the Public Policy Group assess the key issues and advise that self-citations by researchers and teams are a perfectly legitimate and relevant aspect of disciplinary practice.
But individuals should take care to ensure their own self-citation rate is not above the average for their particular discipline.
In the social sciences self-citation is often considered problematic - some scholars see it as a case of ‘blowing your own trumpet’, while others may argue ‘If I don’t cite my work, no-one else will.’
For similar reasons, official bodies often ask for citation data to be adjusted so as to exclude self-citations, as if these were somehow illegitimate when measuring academic performance.
Some bibliometric scholars also concur that self-citation should be excluded from citation counts, at least in undertaking comparative analyses of the research performance of individuals, research groups, departments and universities.
In this view self-citations are not as important as citations from other academics when determining how much of an authority an academic is within a field (Fowler and Aksnes 2007, 428).
To meet this demand to filter out self-cites, some producers of bibliometric indicators have begun to identify and publish the proportion of self-citations, so that they can be compared with citations from other authors.
However, there are also good grounds for objecting to this approach and for recognizing self-citations by individuals and research teams as a perfectly legitimate and relevant aspect of disciplinary practices in different parts of academia.
Figure 4.1 shows that there are very large and systematic differences between discipline groups in the proportion of all citations that are self-citation, ranging from a high of 42% for engineering sciences, down to a low of 21% for medical and life sciences.
Figure 4.1: Self-citation rates across groups of disciplines
Source: Centre for Science and Technology Studies, 2007.
The social sciences and the humanities generally have low rates, with a fifth to a quarter of citations being self-cites, whereas in the STEM disciplines the rate is around a third. It seems deeply unlikely that this pattern reflects solely different disciplinary propensities to blow your own trumpet.
Rather, the extent of the variation is most likely determined by the proportion of applied work undertaken in the discipline, and by the serial, cumulative way in which such work develops.
Many engineering departments specialise in particular sub-fields and develop the knowledge frontier in their chosen areas very intensively, perhaps with relatively few rivals or competitors internationally.
Consequently if they are to reference their research appropriately, so that others can check methodologies and follow up effects in replicable ways, engineering authors must include more self-cites, indeed up to twice as many self-cites as in some other disciplines.
Similarly, quite a lot of scientific work depends on progress made in the same lab or undertaken by the same author. In these areas normatively excluding self-cites would be severely counter-productive for academic development. And doing it in bibliometrics work is liable to give a misleading impression.
In this view the lower levels of self-cites in the humanities and social sciences may simply reflect a low propensity to publish applied work in scholarly journals, or to undertake serial applied work in the first place.
The low proportion of self-cites in medicine (arguably a mostly applied field) needs a different explanation, however. It may reflect the importance of medical findings being validated across research teams and across countries (key for drug approvals, for instance).
It may also be an effect of the extensive accumulation of results produced by very short medical articles (all limited to 3,000 words) and the profession’s insistence on very full referencing of literatures, producing more citations per (short) article than any other discipline.
The ‘serial development of applied knowledge’ perspective on self-citation gains some additional evidence from the tendency of self-cites to grow with authors’ ages.
Older researchers do more self-citing, not because they are vainer but simply because they quite legitimately draw more on their own previous work than do young researchers who are new to a sub-field.
Older academics in the social sciences also do a great deal more applied work than younger staff, and, as we show in Part B, they consequently have far larger external impacts.
So they may have to cite their own corpus of work more, for reasons similar to those dictating higher self-cite rates in engineering - namely that their work draws heavily on reports and working papers produced for external clients, or on detailed case studies that may have little prospect of journal publication.
So are self-citations a good or bad idea for academics? Our advice here is that all researchers should prudentially ensure that their own self-citation rate is not above the average for their particular discipline.
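This prudential check can be sketched as a small computation. The snippet below is a minimal illustration, assuming a simple working definition of a self-citation (a citing paper that shares at least one author with the cited work); the author names and citation records are hypothetical, and the only real figures are the discipline averages quoted from Figures 4.1 and 4.2.

```python
def self_citation_rate(citing_papers, own_authors):
    """Fraction of received citations that are self-citations.

    citing_papers: list of author-name sets, one per citing paper.
    own_authors: set of the cited team's author names.
    A citation counts as a self-cite if the citing paper shares
    at least one author with the cited work.
    """
    if not citing_papers:
        return 0.0
    self_cites = sum(1 for authors in citing_papers if authors & own_authors)
    return self_cites / len(citing_papers)


# Discipline averages quoted in the text (proportion of self-citations).
DISCIPLINE_AVERAGES = {
    "engineering": 0.42,
    "medicine": 0.21,
    "political science": 0.21,
    "economics": 0.21,
    "education": 0.26,
    "psychology": 0.28,
}

# Hypothetical example: two of four citing papers share an author.
own = {"A. Author", "B. Coauthor"}
citing = [
    {"A. Author", "C. Colleague"},   # self-citation (shared author)
    {"D. Distant"},
    {"E. External", "F. Friend"},
    {"B. Coauthor"},                 # self-citation
]

rate = self_citation_rate(citing, own)
print(f"Own rate: {rate:.0%} vs economics average "
      f"{DISCIPLINE_AVERAGES['economics']:.0%}")
```

In this toy example the rate (50%) is well above the economics average (21%), which under the advice here would suggest trimming self-cites; a rate at or below the disciplinary figure would raise no flag.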
Figure 4.2 shows that there is some detailed variation within the social sciences, with political science and economics at a low 21%, but with psychology and education high in their rates of self-cites at 28% and 26%, respectively.
Figure 4.2 Self-citation rates for social sciences plus law
But it is equally not a good idea to ‘unnaturally’ suppress referencing of your own previous work. Some research has tested whether citing one’s own work tends to encourage other people to cite it as well.
After controlling for different factors, Fowler and Aksnes (2007) found that each additional self-citation increases the number of citations from others by about one citation after one year, and by about three after five years. Other scholars have also found that self-citations can be a useful promotion mechanism to increase citations from others.
These empirical studies reveal that self-citations can increase the visibility of someone’s work.
One possible logic behind this is that 'Conscientious Scholar A', doing a literature review, may see 'Author B', in one of her best-known works, including a citation to some of B's lesser-known research. Hence A becomes more likely to look at and cite B's less well-known work - although if A had instead been directed to B's better-known works, A's citations would perhaps have done more to grow B's h score and g index.
We therefore recommend that academics do not actively avoid or minimise self-citations, as long as their level of use is in line with their discipline’s average rate. Self-citations may be useful to promote relevant original work that may otherwise pass unnoticed by others.
For senior academics, citing their own applied research outputs (such as research reports, client reports, news articles, blog posts, etc) makes sense because such outputs are often missed in standard academic sources.
For young researchers and academics, who are less well known in their field and have a smaller corpus of work to draw on, self-citations need to be handled carefully. They can legitimately be used to gain visibility for key or supporting works that may not yet be published (such as working papers, research reports, or papers under review).
However, self-cites must only ever be used where they are genuinely needed and relevant for the articles in which they are included.
This is an extract from Maximising the Impacts of Your Research: A Handbook for Social Scientists.
Featured and top left image credit: Dan4th Nicholas (Flickr, CC BY 2.0)
Note: This article gives the views of the authors, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author
The Public Policy Group is a research team at the LSE specialising in academic research practices, government policy and public sector reform, providing an interface between academia and the private, public and 'third' sectors.
They run a group of blogs including the Impact Blog, LSE Review of Books, British Politics and Policy (BPP), European Politics and Policy (EUROPP), American Politics and Policy (USAPP) and Democratic Audit.