This page provides an A-Z glossary of terms related to citations.
Altmetrics - describes emerging technologies and practices enabling the measurement of web-based activity related to research outputs.
Bibliographic coupling - is the measurement of the frequency with which two papers reference the same paper. The "coupling strength" of two given papers increases with the number of references to other papers they have in common.
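As an illustrative sketch (the function name and sample paper identifiers are not from the source), coupling strength can be computed as the size of the intersection of two papers' reference lists:

```python
def coupling_strength(refs_a, refs_b):
    """Number of references two papers have in common (bibliographic coupling)."""
    return len(set(refs_a) & set(refs_b))

# Papers A and B both reference p2 and p3, so their coupling strength is 2.
print(coupling_strength(["p1", "p2", "p3"], ["p2", "p3", "p4"]))  # 2
```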
Bibliometrics - is a set of methods to statistically analyse books, articles or other publications. Citation Analysis is one of the most commonly used bibliometric methods.
Citation analysis - refers to the practice of investigating citation activity between books and/or articles. An underlying principle of citation analysis is that the more times a document is cited by other documents, the more impact it has had within its subject area.
Citation behaviour - refers to the practice of referencing other works, and is often discussed in the context of differences between specific fields of science. For example, chemistry papers will typically reference more papers than computer science papers.
Citation percentiles - are a method for normalising citation counts. Placing a paper's citation count within a percentile range (e.g. the top 5%) of all papers published in the same subject area, document type, and year allows research performance to be compared across different subjects and year ranges.
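A minimal sketch of one common percentile convention (the percentage of cohort papers with fewer citations); note that exact conventions for ties and scale vary between tools, and the sample numbers are illustrative:

```python
def citation_percentile(count, cohort_counts):
    """Percentage of papers in the cohort (same subject, type, year)
    with strictly fewer citations than the given paper."""
    below = sum(1 for c in cohort_counts if c < count)
    return 100 * below / len(cohort_counts)

# A paper with 10 citations outperforms 3 of 5 cohort papers: 60th percentile.
print(citation_percentile(10, [1, 2, 3, 10, 20]))  # 60.0
```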
Co-citation analysis - is the measurement of the frequency with which two papers are cited together by another paper. The underlying principle of this type of analysis is that the more co-citations two given papers share the stronger the similarity between those papers.
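The co-citation count of two papers can be sketched as follows (the function name and sample data are illustrative, not from the source):

```python
def cocitation_count(paper_a, paper_b, citing_reference_lists):
    """Number of citing papers whose reference lists include both A and B."""
    return sum(1 for refs in citing_reference_lists
               if paper_a in refs and paper_b in refs)

# Two of the three citing papers reference both A and B.
citing = [{"A", "B", "C"}, {"A", "D"}, {"A", "B"}]
print(cocitation_count("A", "B", citing))  # 2
```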
Eigenfactor - a means of measuring the importance of a scientific journal. As with the Impact Factor, the number of incoming citations that a journal attracts is used to generate the score. In contrast to the Impact Factor, however, citations from higher-ranked journals are weighted more heavily than those from lower-ranked journals.
g-index - a similar measure to the h-index but designed to give more weight to highly-cited articles. It is defined as follows: [Given a set of articles] ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g² citations. So, for example, if an author has a g-index of 8, their 8 most-cited articles have received at least 64 citations between them.
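The g-index definition above can be sketched directly in code (a simple implementation over a list of per-paper citation counts; the sample data is illustrative):

```python
def g_index(citations):
    """Largest g such that the top g articles together have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        total += count
        if total >= rank * rank:
            g = rank
    return g

# Top 8 articles total 71 citations, and 71 >= 8**2 = 64, so g = 8.
print(g_index([20, 15, 10, 8, 6, 5, 4, 3]))  # 8
```

Some formulations allow g to exceed the number of published articles by padding with zero-citation entries; the sketch above caps g at the article count.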
h-index - a means to measure both the productivity and impact of the published work of an academic. An author has an h-index of h if they have published at least h papers, each of which has been cited at least h times. So, for example, if an academic has 7 papers each with at least 7 citations, then their h-index is 7.
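As an illustrative sketch (function name and sample data are not from the source), the h-index can be computed by ranking papers by citation count and finding the last rank at which the count still meets or exceeds the rank:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers each have at least `rank` citations
        else:
            break
    return h

# 7 papers have at least 7 citations each, but not 8 with at least 8.
print(h_index([10, 9, 8, 7, 7, 7, 7, 3, 1]))  # 7
```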
Impact Factor - a means of measuring the importance of a scientific journal. A journal's impact factor for a given year is the number of citations received that year by papers the journal published during the two preceding years, divided by the number of those papers. For example, if a journal has an impact factor of 5 in 2012, then its papers published in 2010 and 2011 received, on average, 5 citations each in 2012.
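The arithmetic is a simple ratio; as a sketch (the figures are illustrative, not real journal data):

```python
def impact_factor(citations_received, papers_published):
    """Impact factor for year Y: citations received in Y to items published
    in Y-1 and Y-2, divided by the number of items published in Y-1 and Y-2."""
    return citations_received / papers_published

# 500 citations in 2012 to 100 papers published in 2010-2011 gives an IF of 5.
print(impact_factor(500, 100))  # 5.0
```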
InCites - is a research evaluation tool which utilises raw citation data from ISI Web of Science. The University of Manchester does not currently have a subscription with InCites.
Journal Citation Reports (JCR) - is an annual publication by Thomson Reuters providing citation information about journals in the sciences and social sciences.
Matthew Effect - a term coined by Robert K Merton to describe how a high profile academic will often get more recognition than a lesser known academic simply by virtue of the fact that the former has a higher profile.
m-index - a means to measure both the productivity and impact of the published work of a scientist. The m-index is defined as h divided by n, where h is the scientist's h-index and n is the number of years since their first published paper. So, for example, if an academic has an h-index of 7 and has been publishing papers for 2 years, then their m-index is 3.5.
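A one-line sketch of the definition above (names are illustrative):

```python
def m_index(h, years_since_first_paper):
    """m-index: h-index divided by years of publishing activity."""
    return h / years_since_first_paper

# An h-index of 7 after 2 years of publishing gives an m-index of 3.5.
print(m_index(7, 2))  # 3.5
```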
Normalisation - describes the practice of contextualising citation data for purposes of meaningful comparison. The citation count of a given paper will commonly be normalised in respect of the average citation counts for that paper's subject area, the journal it appears in, and the year in which it was published.
Publish or Perish - is a research evaluation tool which utilises raw citation data from Google Scholar and Microsoft Academic Search.
SCImago - is a research evaluation tool which utilises raw citation data from the Scopus database.
SCImago Journal Rank (SJR) - ranks journals based not only on how many citations a journal attracts but also on where those citations originate. Unlike the Impact Factor, where every citation is equal, the subject field, quality and reputation of the citing journal have an effect on the value of an individual citation.
SciVal - is a research evaluation tool which utilises raw citation data from Scopus. The University of Manchester currently has a subscription to SciVal.
Scopus - is a bibliographic database containing abstracts and citations for academic journal articles.
Self-citation - describes the practice of authors citing their own papers.
Source Normalised Impact per Paper (SNIP) - measures contextual citation impact based on the total number of citations in a subject field.
VOSviewer - is a freely available computer program for creating and visualising bibliometric maps of science.
Web of Knowledge - is a bibliographic database containing abstracts and citations for academic journal articles.