Metrics

Metrics form part of an evolving and increasingly digital research environment in which data and analysis play an important role. However, the way these metrics are currently described, produced and used is still experimental and open to misunderstanding, and they can drive negative effects and behaviours as well as positive ones.

Metrics fall into two broad areas: traditional metrics (bibliometrics), which are largely based on citations, and alternative metrics (altmetrics), which are largely based on the attention that an output receives.

Responsible metrics

Responsible metrics can be defined by the following key principles (outlined in The Metric Tide):

  • Robustness – basing metrics on the best possible data in terms of accuracy and scope
  • Humility – recognising that quantitative evaluation should support, but not supplant, qualitative, expert assessment
  • Transparency – keeping data collection and analytical processes open, so that those being evaluated can test and verify the results
  • Diversity – accounting for variation by research field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system
  • Reflexivity – recognising and anticipating the systemic and potential effects of indicators, and updating them in response

Some common metrics to consider are:

  • Citations per publication: the average number of citations received per publication (see the sketch after this list)
  • Collaboration impact: the average number of citations received by publications that have international, national or institutional co-authorship
  • Field-weighted citation impact: the ratio of citations received relative to the expected world average for the subject field, publication type and publication year
  • Outputs in the top citation percentiles: publications that have reached a particular threshold of citations received
  • Outputs in the top journal percentiles: publications that have been published in the world’s top journals
  • Scholarly output: the number of publications
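
To make two of these concrete, here is a minimal Python sketch of citations per publication and of a simplified field-weighted citation impact. The citation counts and the expected-citations baseline are made-up illustrative values; real baselines come from databases such as SciVal.

```python
def citations_per_publication(citation_counts):
    """Average number of citations received per publication."""
    return sum(citation_counts) / len(citation_counts) if citation_counts else 0.0


def field_weighted_citation_impact(citations, expected_citations):
    """Ratio of citations received to the expected world average for
    outputs of the same subject field, publication type and year."""
    return citations / expected_citations


counts = [12, 3, 0, 25, 7]                      # hypothetical citation counts
print(citations_per_publication(counts))        # 9.4
print(field_weighted_citation_impact(12, 8.0))  # 1.5, i.e. 50% above the world average
```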

Bibliometrics

Bibliometrics analyses the impact of research outputs using quantitative measures. It complements qualitative indicators of research impact, such as peer review, funding received, and the number of patents and awards granted; together they assess the quality and impact of research.

You can use bibliometrics to:

  • provide evidence of the impact of your research outputs when applying for jobs, promotion or research funding
  • identify new and emerging areas of research
  • identify potential research collaborators
  • identify journals in which to publish
  • benchmark against other research groups/institutions

Types of bibliometric measures

Here are some common bibliometric measures:

  • Citation counts: the number of times a research output appears in the reference lists of other documents (articles, books, reviews, conference proceedings, etc.). Found in: Google Scholar, Scopus and Web of Science.
  • H-index: designed to measure an author’s productivity and impact. It is the number of an author’s publications (h) that have h or more citations each (see the sketch below). Found in: Google Scholar, Scopus and Web of Science.
  • Field-weighted citation impact: the ratio of citations received relative to the expected world average for the subject field, publication type and publication year. It can apply to a single research output or to a group of research outputs. Found in: SciVal.
  • Outputs in top percentiles: the number or percentage of research outputs among the top most-cited publications in the world, the UK, or a specific country. Found in: Scopus and SciVal.
  • Journal Impact Factor: based on the average number of citations received per paper published in that journal in the preceding two years (see the sketch below). Found in: Journal Citation Reports.
  • CiteScore: the average number of citations received in a calendar year by all items published in that journal in the preceding three years. Found in: Scopus.
  • SCImago Journal Rank: places a higher value on citations from more prestigious journals. Found in: Scopus.
  • SNIP (Source Normalized Impact per Paper): the ratio of a journal’s citation count per paper to the citation potential of its subject field, which normalises for differences in citation rates between subjects. Found in: Scopus.

(Adapted from the Metrics Toolkit, licensed under a CC BY 4.0 licence.)

The Metrics Toolkit has further information about the various metrics that are available.
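
The h-index and Journal Impact Factor definitions above reduce to simple arithmetic. Here is a minimal sketch using made-up numbers; in practice you would take these values from Google Scholar, Scopus, Web of Science or Journal Citation Reports rather than compute them by hand.

```python
def h_index(citation_counts):
    """An author has index h if h of their publications have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)


def impact_factor(citations_this_year, items_in_preceding_two_years):
    """Average citations received this year by papers the journal
    published in the preceding two years."""
    return citations_this_year / items_in_preceding_two_years


print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
print(impact_factor(600, 150))    # 4.0
```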

Considerations when using bibliometrics include:

  • Quality: high citation counts may not indicate quality. For example, an article may be cited frequently because other authors are refuting its findings.
  • Disciplinary patterns: some research areas cite papers more than others. For example, in medicine and health there is a strong culture of citing other articles to validate findings.
  • Level of researcher experience: some metrics are higher for experienced researchers than for early career researchers, so it is important not to compare researchers who are at different stages of their careers.
  • Database coverage: the sources used to gather publication data index different journals, so results will vary depending on which database you use.

Altmetrics

“Altmetrics is the creation and study of new metrics based on the Social Web for analysing and informing scholarship.” (altmetrics.org)

Altmetrics (or alternative metrics) provide article-level evidence of the societal impact of research, in terms of its mentions in social media (blogs, Facebook and Twitter), Wikipedia and other quasi-scholarly platforms, news sources and policy documents.

Altmetrics can include:

  • tweets, mentions, shares or links
  • downloads, clicks or views
  • policy mentions
  • media mentions
  • reviews, comments, ratings or recommendations
  • adaptations or derivative works, and
  • readers, subscribers, watchers or followers.

Best uses: altmetrics can help researchers understand how their outputs are being shared and discussed online and via social media, and may supplement the information gained from traditional indicators. It is important to note that altmetrics indicate the attention an output has received, not necessarily the quality of the article.

You can download the Altmetric bookmarklet for your own use. Install it in your browser to get one-click, article-level indicators when reading an online journal article (it works with any journal article that has a DOI).
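
If you want the same article-level indicators programmatically, Altmetric also provides a public REST API keyed by DOI. The sketch below assumes the free v1 endpoint (https://api.altmetric.com/v1/doi/...) and uses an arbitrary example DOI; check Altmetric’s API documentation before relying on the endpoint or the response fields.

```python
import requests


def altmetric_summary(doi):
    """Fetch the Altmetric attention record for a DOI, or None if it has none."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # Altmetric holds no attention data for this DOI
        return None
    resp.raise_for_status()
    return resp.json()


record = altmetric_summary("10.1038/480426a")  # example DOI
if record:
    # "score" is the Altmetric Attention Score; the field name is an
    # assumption based on the public v1 API and should be verified.
    print(record.get("score"))
```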

The University of Northampton has invested in Altmetric for Institutions. You can use it to look at the attention that outputs from research centres, subject areas or Faculties are receiving; register using your University login details.

Altmetric for Institutions is a web-based platform that enables you to search, monitor and measure online conversations surrounding the research outputs of specific authors, research groups and departments at your institution.

This data can support institutions in a number of key areas, including:

  • Funding (applications, reporting and alumni donations): support your faculty in finding evidence of broader impact and engagement for use in funding applications, and showcase the attention research from your institution generates in order to encourage further support.
  • Reputation management: monitor early engagement and be aware of who is talking about your research and what they’re saying. Ensure that the work of your institution and your researchers is accurately reported and communicated.
  • Benchmarking: access to the full Altmetric database means that you can compare the attention your research receives with that of peer institutions or research centres, and work to improve your comparative reach.
  • Inform strategic planning: understand who is talking about your research and where it is having most impact. Identify areas for improvement and collaboration, and support research strategy decisions.
  • Drive marketing and communications activity: determine what effect your outreach activities are having, and monitor interest in and dissemination of your research in markets around the world.