Bibliometrics is the quantitative analysis of scholarly publications, intended to provide an indication of their impact on academic and public discourse. Traditionally, bibliometrics takes account of the number of times a research paper is cited, in order to compare it against other papers in the same field. Metrics for other forms of research output and other measures of impact are slowly becoming established.
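Citation counts only become meaningful in comparison, because citation rates vary widely between fields. As a minimal illustration of the idea (with hypothetical data, not a recommended methodology), a field-normalised score divides a paper's citation count by the average for comparable papers in the same field and year:

```python
from statistics import mean

# Hypothetical citation counts for papers in the same field and year.
# In practice these would come from a citation database.
field_citations = [0, 1, 2, 3, 5, 8, 12, 40]  # skewed, as is typical

def field_normalised_score(paper_citations: int, field: list[int]) -> float:
    """Ratio of a paper's citations to the field average;
    1.0 means 'cited as often as the field average'."""
    return paper_citations / mean(field)

print(field_normalised_score(12, field_citations))  # ~1.35
```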
- The University of York Policy for Research Evaluation Using Quantitative Data
- A Practical Guide to Bibliometrics for researchers and students
Bibliometrics can be used to:
- indicate the impact of your own research or that of your research group
- identify the most highly cited researchers in a field
- identify the most highly cited journals in a field
(Adapted from Bibliometrics Explained, University of Leeds, 2017.)
Three main providers currently dominate the market for citation data.
In addition, Altmetric is a well-established provider of non-traditional metrics, tracking data about publications mentioned in policy documents, mainstream media and social media.
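Altmetric exposes a free, rate-limited public API that returns a JSON record for a given DOI. A minimal sketch is shown below; the endpoint is the documented v1 service, but the field names and the example DOI are taken from Altmetric's public documentation and may change, so verify them before relying on this:

```python
import json
from urllib.request import urlopen
from urllib.error import HTTPError

def altmetric_summary(doi: str) -> dict | None:
    """Fetch the public Altmetric record for a DOI (None if untracked)."""
    try:
        with urlopen(f"https://api.altmetric.com/v1/doi/{doi}") as resp:
            record = json.load(resp)
    except HTTPError as err:
        if err.code == 404:  # DOI not tracked by Altmetric
            return None
        raise
    # Field names per the public v1 docs; treat as assumptions.
    return {
        "title": record.get("title"),
        "attention_score": record.get("score"),
    }

print(altmetric_summary("10.1038/nature.2012.9872"))
```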
- Access to citation analysis databases
- Search techniques and tips
Responsible bibliometrics may be understood in terms of several dimensions.
Bibliometric analysis is mainstream within the Anglophone academic community. Using bibliometric data to inform an assessment of research performance can be more transparent and less vulnerable to bias than peer review, as well as more cost-effective.
It is increasingly recognised, however, that no single bibliometric measure is sufficient to assess research quality, and that expert qualitative judgement is needed to add depth.
The San Francisco Declaration on Research Assessment (DORA) originates from a 2012 scholarly conference at which participants recognised the "need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties". It is particularly critical of the use of journal-level metrics as a surrogate measure of author impact. DORA has been signed by representatives of 859 institutions worldwide, including the University of York (as of May 2018).
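DORA's objection can be made concrete: a journal impact factor is a mean over a heavily skewed citation distribution, so it says little about any individual article in the journal. A toy illustration, using hypothetical citation counts:

```python
from statistics import mean, median

# Hypothetical citation counts for one journal's articles over two years.
# A few highly cited papers pull the mean (the basis of the impact
# factor) well above the typical paper.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 6, 90]

print(mean(citations))    # 10.9 -- impact-factor-style average
print(median(citations))  # 2.0  -- the typical article
```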
In 2015, a team of five academics released the Leiden Manifesto for Research Metrics: ten principles "distilling best practice in metrics-based research assessment, so that researchers can hold evaluators to account, and evaluators can hold their indicators to account". It had an immediate impact and is highly regarded as a framework for developing an institutional position.
After the submission window for REF 2014 closed, HEFCE commissioned an Independent Review of the Role of Metrics in Research Assessment and Management, chaired by Professor James Wilsdon, to investigate "the current and potential future roles that quantitative indicators can play in the assessment and management of research".
The Review's report, The Metric Tide, was published in July 2015, calling for the research community to "develop a more sophisticated and nuanced approach to the contribution and limitations of quantitative indicators". Analysis of REF 2014 results concluded that author-level metrics "cannot provide a like-for-like replacement for REF peer review". However, a 2017 study reported in Nature Index demonstrated a positive correlation between journal impact factor and REF 4* outputs in some science disciplines.
Universities UK has since convened the Forum for Responsible Metrics: research funders, sector bodies and infrastructure experts working in partnership to consider "how quantitative indicators might be used in assessing research outputs and environments" in the context of the next REF, and "working to improve the data infrastructure that underpins metric use".
Research England's draft criteria for REF 2021 propose that some subject panels will use citation data "as part of the indication of academic significance to inform their assessment of output quality", and it has been announced that the data will be supplied by Clarivate. Panels are explicitly prohibited from using "journal impact factors or any hierarchy of journals" in their assessment of outputs.