Research Impact

Persistent Identifiers (PIDs)

What is a Persistent Identifier (PID)?

A PID is a unique, long-lasting digital identifier assigned to an object (e.g., a publication, dataset, or piece of software), a person (e.g., a researcher, author, or contributor), or an organization (e.g., a funder or institution). It remains constant and reliable even if the object's location or web address changes. A PID may be connected to a set of metadata describing the object, person, or organization. Commonly, ORCID iDs are used as PIDs for people, and DOIs for publications and other research outputs.

What is an ORCID iD?

ORCID iDs are increasingly used across research workflows and organizations as persistent identifiers for people. An ORCID iD is a unique, persistent identifier for identifying, linking, and discovering researchers, authors, and contributors, much like a Social Security Number (SSN) identifies an individual. ORCID allows individuals to register a unique PID that can be used to manage research outputs and scholarly workflows.
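An ORCID iD is a 16-character identifier, usually displayed as a URL such as https://orcid.org/0000-0002-1825-0097 (ORCID's documented example iD). Its final character is a check digit computed with the ISO 7064 MOD 11-2 algorithm, so a mistyped iD can often be caught before it enters a workflow. Below is a minimal Python sketch of that checksum validation; the function names are illustrative and not part of any official ORCID library.

```python
# Minimal sketch: validate an ORCID iD's check digit (ISO 7064 MOD 11-2).
# Function names are illustrative, not part of any official ORCID library.

def orcid_check_digit(base_digits: str) -> str:
    """Compute the MOD 11-2 check character for the first 15 digits."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Accept '0000-0002-1825-0097' or a full https://orcid.org/ URL."""
    digits = orcid.rstrip("/").rsplit("/", 1)[-1].replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_digit(digits[:15]) == digits[15].upper()

print(is_valid_orcid("https://orcid.org/0000-0002-1825-0097"))  # True
```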

How to use an ORCID iD?

Your ORCID iD can be used throughout the research process, such as when you apply for a grant or award, write a data management plan, deposit a dataset into a repository, publish a journal article, or attend a conference. You can also link it with other IDs (see below), such as your Stanford Profiles, and list it on your CV, resume, web pages, email signature, and other public profiles.

For more information on ORCID, visit the ORCID guide.

Other PIDs for People

  • Scopus ID for authors with publications in Scopus
  • Google Scholar Profiles for authors and groups who have registered a profile

For more information on PIDs for people, visit the Research Impact guide.

What is a Digital Object Identifier (DOI)?

DOIs are the most widely known and used PIDs for research outputs, such as journal articles, conference proceedings, preprints, protocols, and datasets. DOIs differ from URLs because they are designed to identify an object on the web persistently and stably: a DOI provides lasting information on where the object can be found on the internet. Over time, a URL might become a dead link because the web address changes or disappears.

Since a DOI never changes, it aids citation tracking (e.g., measuring where research is being used and referenced), increases data sharing and reuse (e.g., by making information discoverable), and becomes part of repository, journal, database, and other scholarly workflows.
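Because every DOI resolves through the https://doi.org/ resolver, metadata can also be retrieved programmatically via HTTP content negotiation, which Crossref- and DataCite-registered DOIs support. A minimal Python sketch, assuming the third-party requests library (the DOI shown is a hypothetical placeholder):

```python
# Minimal sketch: fetch citation metadata for a DOI via content negotiation
# on the doi.org resolver. Assumes the third-party `requests` library;
# the DOI below is a hypothetical placeholder.
import requests

def fetch_doi_metadata(doi: str) -> dict:
    """Resolve a DOI and request machine-readable CSL JSON metadata."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

meta = fetch_doi_metadata("10.1234/example-doi")  # hypothetical DOI
print(meta.get("title"))
```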


How to get a DOI?

DOIs are provided by DataCite, CrossRef, and other DOI registration agencies coordinated by the International DOI Foundation (IDF). A DOI is assigned to an object that is intended to be shared with a community and/or managed as intellectual property.

The Stanford Digital Repository (SDR) offers DOI services to Stanford affiliates. Depositing to the SDR is not required to receive a DOI. For more information, visit https://library.stanford.edu/dois-digital-object-identifier

For datasets or unpublished works (including preprints):

  • Upload your work to a repository that provides DOIs.
  • Examples: DRYAD (datasets), CORE (humanities), bioRxiv (preprints), arXiv (preprints), OSF (science disciplines), Zenodo (multidisciplinary)

Visit the Data Management and Sharing guide to learn more. 

For published work:

  • Most publishers provide DOIs upon publication. If they don't, consider self-archiving your work in a repository if permitted by the publisher.

Can't find a DOI?

Not all publications have DOIs. If you are unable to find a DOI, you can use CrossRef or DataCite to verify whether a publication has one.
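For example, Crossref's public REST API (https://api.crossref.org) can be searched by bibliographic details. A minimal Python sketch, assuming the third-party requests library (the title queried is hypothetical):

```python
# Minimal sketch: check whether a publication has a registered DOI by
# searching Crossref's public REST API. Assumes the third-party `requests`
# library; the title below is hypothetical.
import requests

def find_doi(title: str) -> str | None:
    """Return the DOI of the best Crossref match for a title, if any."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0]["DOI"] if items else None

print(find_doi("An example article title"))  # hypothetical title
```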

Other PIDs for publications and data

  • ISBN for books 
  • ISSN for journals
  • PMID for PubMed materials


Alternative Metrics

"Altmetrics expand our view of what impact looks like, but also of what’s making the impact. This matters because expressions of scholarship are becoming more diverse” (Altmetrics Manifesto).

Alternative metrics (or altmetrics) monitor and measure the reach and impact of scholarship through online interactions, complementing the traditional measure of academic success: citation impact. Traditional citation measures represent only one type of impact and do not reflect a changing scholarly ecosystem that has moved beyond print. Altmetrics track online interactions with research outputs (e.g., articles, datasets, tools, software, videos) as a way of measuring research impact and reach. Altmetrics can address questions such as:

  • How many times has an article been downloaded?
  • Was it covered by news agencies and outlets?
  • How many times has it been shared on social media (e.g., Twitter, Facebook, Reddit)?
  • Which countries are viewing my research?
Benefits

  • complement traditional citation metrics
  • capture elements of societal impact
  • offer speed and discoverability

Drawbacks

  • lack of a standard definition for altmetrics
  • data is not normalized and can be hard to compare
  • time-dependent (older works may not have much altmetrics activity)
  • tracking issues


Often, you'll see altmetrics represented by a colorful donut. Each color of the donut reflects a type of impact monitored by altmetrics.

Figure: The donut and Altmetric Attention Score (Altmetric)
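If you want the numbers behind a donut, Altmetric exposes a public details endpoint keyed by DOI. A hedged Python sketch, assuming the v1 endpoint (https://api.altmetric.com/v1/doi/<doi>), the third-party requests library, and a hypothetical DOI; check Altmetric's documentation for current terms and rate limits:

```python
# Hedged sketch: look up a DOI's altmetrics via Altmetric's public details
# endpoint. Assumes the v1 endpoint and the third-party `requests` library;
# the DOI below is a hypothetical placeholder.
import requests

def altmetric_summary(doi: str) -> dict | None:
    """Return Altmetric's record for a DOI, or None if no attention data exists."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=30)
    if resp.status_code == 404:  # Altmetric has no record for this DOI
        return None
    resp.raise_for_status()
    return resp.json()

record = altmetric_summary("10.1234/example-doi")  # hypothetical DOI
if record:
    print(record.get("score"))  # the Altmetric Attention Score
```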

What Research Metrics Can't Do

Metrics cannot provide a simple measure for complex impact questions. As measures of attention, metrics can only tell you so much about the quality, success, or impact of research and researchers. Bear in mind that research may receive attention for negative reasons (e.g., a controversial or flawed study). Metrics can also be influenced by external factors, such as academic discipline and career stage, so take care that your judgement isn't swayed by these factors. Most importantly, metrics are intended to be used in conjunction with qualitative measures such as peer review.

Some general issues with metrics are summarized below:

  • Lack of a generally accepted definition of impact
  • Lack of harmonization of metrics and terms
  • Time lag between research discovery and its application in society
  • Difficulty establishing a direct correlation between impact and a specific research output
  • Documentation and calculations may not be publicly available
  • Assessment varies by type of research and discipline

Choosing an Appropriate Metric

As you may have discovered in the guide so far, there are many different metrics to evaluate research impact. To understand what a metric means, how it is calculated, and if it's a good match for your impact question, explore the Metrics Toolkit. Some elements to consider when choosing a metric:

  • Disciplines vary in their publication practices, and citation patterns differ as a result: research outputs are cited more frequently in some disciplines than in others. It is important to compare researchers within their own disciplines to avoid erroneous benchmarking. Also, bibliometrics focus on measuring citations, mostly to journal articles, while disciplines such as the arts, humanities, and social sciences rely less on journal publications.
  • Metrics have the potential to be ‘gamed’, so that self-citing and the citing of close colleagues can artificially boost citation counts.
  • A citation in itself is not automatically a measure of prestige. A paper may be cited because it is an example of ‘bad’ research.
  • Sources used to supply data for citation counting differ in coverage, indexing different journals – results will vary depending on the data source used.
  • When evaluating researchers, citation counting by itself will automatically favor experienced researchers over those at an earlier stage in their career, because of the accumulation of research outputs over a longer time period.