Metrics are widely and increasingly used in academia and carry significant weight in many contexts. They can play an important role in job and funding applications, promotions, and institutional rankings. Researchers may also wish to monitor the impact of their research and outputs through metrics.
However, using bibliometrics without considering their biases and weaknesses can erode trust in scholarship and damage research culture. For example, the Journal Impact Factor (JIF) has been used as a proxy for the quality of individual research outputs, even though it was designed to measure journal-level characteristics rather than the quality of individual articles. Similarly, using metrics to compare institutions or academics across different disciplines can produce misleading results (see the limitations of metrics to find out more).
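For context, the standard two-year JIF is a journal-level average, roughly calculated as: the citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items the journal published in those two years. Because it averages citations across everything a journal publishes, a high JIF says nothing about how often any individual article in that journal has been cited.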
It is always recommended that metrics be used responsibly. Responsible use of metrics is an umbrella term covering a wide range of good practices suggested or adopted by research institutions and initiatives. Its main aim is to acknowledge and highlight the weaknesses and biases of bibliometrics and to advocate for their appropriate use. The following is not an exhaustive list but a broad set of guidelines to help you become familiar with the responsible use of metrics.
Metrics should never replace expert peer review in assessing the research performance of individuals, departments, or institutions. Indicators (where available) should support qualitative assessment in all areas where research assessment is required, such as recruitment, promotion, funding allocation, and reward.
The purpose of using metrics should be clearly and appropriately defined and contextualised in advance. It is not good practice to monitor indicators without first framing the question they are meant to answer; metrics can only provide useful answers to well-considered questions.
When using bibliometrics, it is important to remember that all metrics and databases, whether traditional or alternative, have weaknesses and biases. Without properly considering these biases, any analysis risks being incomplete or flawed, which may lead to decisions with unintended consequences. Using the title or impact factor of a journal to assess the quality of research outputs must be avoided. Likewise, using indicators such as the h-index to compare researchers is not good practice, because they do not account for individual circumstances or disciplinary differences.
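For context, a researcher's h-index is the largest number h such that h of their outputs have each been cited at least h times: for example, a researcher whose five papers have been cited 10, 7, 5, 2, and 1 times has an h-index of 3. Because typical citation rates differ sharply between disciplines and career stages, the same value can mean very different things for different researchers.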
Transparency is important when evaluating research. Always be explicit in your analysis about the methods, criteria, databases, or analysis products used for the assessment.
The value and impact of all research outputs (not only journal articles and conference proceedings) should be considered. The contributions of a researcher, department, or institution to research can be very diverse. Journal articles and conference proceedings may be the dominant output types in academia, but this is not the case in all disciplines. Recognition should also extend to research outcomes whose impact cannot be captured by traditional methods.
Responsible use of metrics applies to everyone within the research infrastructure, including researchers, managers, HR managers, heads of department, funders, publishers, data providers, and even governments. Each party must understand the limitations and proper use of metrics to foster a fair and inclusive assessment process.
When quantitative indicators are used to govern, manage, and assess research and research outcomes, the following dimensions should be considered, as detailed in the Metric Tide report:
- Robustness: basing metrics on the best possible data in terms of accuracy and scope,
- Humility: recognising that quantitative indicators should support, not supplant, qualitative expert assessment,
- Transparency: keeping data collection and analysis processes open and transparent, so that those being evaluated can test and verify the results,
- Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system,
- Reflexivity: recognising and anticipating the systemic and potential effects of indicators and updating them in response.