Indicators

The CWTS Leiden Ranking Open Edition 2025 offers a set of bibliometric indicators that provide university-level statistics on scientific impact, collaboration, and open access publishing. The indicators are discussed in detail below.

Publications

The Leiden Ranking Open Edition is based on publications in the OpenAlex database. The most recent indicators made available in the Leiden Ranking Open Edition are based on publications in the period 2020–2023, but indicators are also provided for earlier periods.

Indicators are available for two sets of publications: (1) core publications only, and (2) core and non-core publications. Core publications are publications in international scientific journals in fields that are suitable for citation analysis. Non-core publications are publications in other sources, in particular publications in scientific journals with a national or regional focus.

The set of core and non-core publications includes all publications in the OpenAlex database that meet the following criteria:

  • The publication is classified as article, review, or book chapter in the OpenAlex database.
  • The publication has appeared in a source that is classified as journal, book, book series, or conference proceedings in the OpenAlex database.
  • The publication has at least one reference and at least one author with an affiliation in the OpenAlex database.
  • The publication has not been retracted.

The set of core publications is more restricted. In addition to the above criteria, core publications also meet the following criteria:

  • The publication has been written in English.
  • The publication has appeared in a source that is classified as journal or book series in the OpenAlex database.
  • The publication has appeared in a source that is classified as a core journal based on the criteria presented below.

The last criterion is particularly important. A source is classified as a core journal if it meets the following criteria:

  • The source has an international scope, as reflected by the countries in which researchers publishing in the source and citing the source are located.
  • The source has a sufficiently large number of references to other core journals, indicating that the source is situated in a field that is suitable for citation analysis. Many sources in the arts and humanities do not meet this criterion. The same applies to trade journals and popular magazines.
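To make the selection logic concrete, the criteria above can be summarized as a simple filter over publication records. The sketch below is a minimal Python illustration; all field names (type, source_type, n_references, has_affiliated_author, is_retracted, language, source_id) and the core_journals set are hypothetical stand-ins for the underlying OpenAlex metadata, not the actual schema.

    # Hypothetical publication records; field names are illustrative stand-ins
    # for the underlying OpenAlex metadata, not the actual schema.
    def is_core_or_non_core(pub):
        # Criteria for the set of core and non-core publications.
        return (pub["type"] in {"article", "review", "book chapter"}
                and pub["source_type"] in {"journal", "book", "book series",
                                           "conference proceedings"}
                and pub["n_references"] >= 1
                and pub["has_affiliated_author"]
                and not pub["is_retracted"])

    def is_core(pub, core_journals):
        # Additional criteria for the more restricted set of core publications.
        return (is_core_or_non_core(pub)
                and pub["language"] == "en"
                and pub["source_type"] in {"journal", "book series"}
                and pub["source_id"] in core_journals)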

Size-dependent vs. size-independent indicators

Indicators included in the Leiden Ranking Open Edition have two variants: a size-dependent and a size-independent variant. In general, size-dependent indicators are obtained by counting the absolute number of publications of a university that have a certain property, while size-independent indicators are obtained by calculating the proportion of a university’s publications with that property. For instance, the number of highly cited publications of a university and the number of its publications co-authored with other organizations are size-dependent indicators. The proportion of a university’s publications that are highly cited and the proportion co-authored with other organizations are size-independent indicators. In the case of size-dependent indicators, universities with a larger publication output tend to perform better than universities with a smaller one. Size-independent indicators correct for the size of a university’s publication output, so when these indicators are used, both larger and smaller universities may perform well.
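
The relationship between the two variants can be illustrated with a small sketch; the data and the has_property predicate below are hypothetical.

    # Sketch: a size-dependent indicator counts a university's publications
    # with a given property; the size-independent variant is the proportion.
    def size_dependent(pubs, has_property):
        return sum(1 for pub in pubs if has_property(pub))

    def size_independent(pubs, has_property):
        return size_dependent(pubs, has_property) / len(pubs)

    # Hypothetical example: P(>= 10 cit) and PP(>= 10 cit).
    pubs = [{"citations": 3}, {"citations": 12}, {"citations": 40}, {"citations": 7}]
    highly_cited = lambda pub: pub["citations"] >= 10
    print(size_dependent(pubs, highly_cited))    # 2
    print(size_independent(pubs, highly_cited))  # 0.5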

Scientific impact indicators

The Leiden Ranking Open Edition provides the following indicators of scientific impact:

  • P. Total number of publications of a university.
  • P(≥ 10 cit) and PP(≥ 10 cit). The number and the proportion of a university’s publications with at least 10 citations.
  • P(≥ 20 cit) and PP(≥ 20 cit). The number and the proportion of a university’s publications with at least 20 citations.
  • P(≥ 50 cit) and PP(≥ 50 cit). The number and the proportion of a university’s publications with at least 50 citations.
  • P(≥ 100 cit) and PP(≥ 100 cit). The number and the proportion of a university’s publications with at least 100 citations.
  • TCS and MCS. The total and the average number of citations of the publications of a university.
  • P(top 1%) and PP(top 1%). The number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 1% most frequently cited.
  • P(top 5%) and PP(top 5%). The number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 5% most frequently cited.
  • P(top 10%) and PP(top 10%). The number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited.
  • P(top 50%) and PP(top 50%). The number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 50% most frequently cited.
  • TNCS and MNCS. The total and the average number of citations of the publications of a university, normalized for field and publication year. An MNCS value of two, for instance, means that the publications of a university have been cited, on average, twice as frequently as the average publication in their field and publication year.

Citations are counted until the end of 2024 in the calculation of the above indicators. Author self-citations are excluded. The P(top x%), PP(top x%), TNCS, and MNCS indicators are normalized for differences in citation patterns between scientific fields; these indicators are available only for the set of core publications. The P(≥ x cit), PP(≥ x cit), TCS, and MCS indicators are non-normalized; these indicators are available only for the set of core and non-core publications. In the calculation of the normalized indicators, about 4500 scientific fields are distinguished. These fields are defined at the level of individual publications: using a computer algorithm, each publication in the OpenAlex database is assigned to a field based on its citation relations with other publications. More information is provided in this blog post.
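
The following sketch illustrates how TNCS and MNCS follow from the normalization just described. It assumes that the average citation count for each publication’s field and year is already available; the data structure is hypothetical.

    # Sketch of TNCS and MNCS. Each publication carries its citation count and
    # the average citation count of all publications in the same field and
    # year (hypothetical, precomputed values).
    pubs = [
        {"citations": 10, "field_year_avg": 5.0},  # normalized score 2.0
        {"citations": 3,  "field_year_avg": 6.0},  # normalized score 0.5
    ]
    normalized = [pub["citations"] / pub["field_year_avg"] for pub in pubs]
    tncs = sum(normalized)         # 2.5
    mncs = tncs / len(normalized)  # 1.25: 25% above the field/year average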

The TCS, MCS, TNCS, and MNCS indicators are not available on the main ranking page. These indicators can be found on the page for an individual university.

Collaboration indicators

The Leiden Ranking Open Edition provides the following indicators of collaboration:

  • P. Total number of publications of a university.
  • P(collab) and PP(collab). The number and the proportion of a university’s publications that have been co-authored with other organizations.
  • P(int collab) and PP(int collab). The number and the proportion of a university’s publications that have been co-authored by researchers affiliated with organizations in two or more countries.
  • P(industry) and PP(industry). The number and the proportion of a university’s publications that have been co-authored with organizations classified as industry in the OpenAlex database.
  • P(<100 km) and PP(<100 km). The number and the proportion of a university’s publications with a geographical collaboration distance of less than 100 km. The geographical collaboration distance of a publication equals the largest geographical distance between two addresses mentioned in the publication’s address list; a computational sketch is given after this list.
  • P(>5000 km) and PP(>5000 km). The number and the proportion of a university’s publications with a geographical collaboration distance of more than 5000 km.
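
A minimal sketch of the distance computation, assuming the addresses have already been geocoded to latitude/longitude pairs; the haversine formula is one common way to compute great-circle distances.

    from itertools import combinations
    from math import asin, cos, radians, sin, sqrt

    def haversine_km(a, b):
        # Great-circle distance in km between two (lat, lon) pairs in degrees.
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(h))

    def collaboration_distance_km(addresses):
        # Largest pairwise distance between addresses in the address list.
        if len(addresses) < 2:
            return 0.0
        return max(haversine_km(a, b) for a, b in combinations(addresses, 2))

    # Hypothetical example: Leiden and Boston, roughly 5500 km apart.
    print(collaboration_distance_km([(52.16, 4.49), (42.36, -71.06)]))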

Open access indicators

The Leiden Ranking Open Edition provides the following indicators of open access publishing:

  • P. Total number of publications of a university.
  • P(OA) and PP(OA). The number and the proportion of open access publications of a university.
  • P(gold OA) and PP(gold OA). The number and the proportion of gold open access publications of a university. Gold open access publications are publications in an open access journal.
  • P(hybrid OA) and PP(hybrid OA). The number and the proportion of hybrid open access publications of a university. Hybrid open access publications are publications in a subscription journal that are open access with a license that allows the publication to be reused.
  • P(bronze OA) and PP(bronze OA). The number and the proportion of bronze open access publications of a university. Bronze open access publications are publications in a subscription journal that are open access without a license that allows the publication to be reused.
  • P(green OA) and PP(green OA). The number and the proportion of green open access publications of a university. Green open access publications are publications in a subscription journal that are not open access in the journal itself but are openly available in a repository.

In the calculation of the P(OA) and PP(OA) indicators, a publication is considered open access if it is gold, hybrid, bronze, or green open access.
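
The four categories can be expressed as a simple classifier. The sketch below is hypothetical; its input fields (in_oa_journal, openly_available_in_journal, has_reuse_license, in_repository) are illustrative stand-ins for the underlying metadata.

    # Sketch of the open access categories described above; all input fields
    # are hypothetical stand-ins for the underlying metadata.
    def oa_category(pub):
        if pub["in_oa_journal"]:
            return "gold"
        if pub["openly_available_in_journal"]:
            # Subscription journal: the license separates hybrid from bronze.
            return "hybrid" if pub["has_reuse_license"] else "bronze"
        if pub["in_repository"]:
            return "green"
        return "closed"

    def is_open_access(pub):
        # P(OA): open access if gold, hybrid, bronze, or green.
        return oa_category(pub) != "closed"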

Counting method

The scientific impact indicators in the Leiden Ranking Open Edition can be calculated using either a full counting or a fractional counting method. The full counting method gives a full weight of one to each publication of a university. The fractional counting method gives less weight to collaborative publications than to non-collaborative ones. For instance, if a publication has been co-authored by five researchers and two of these researchers are affiliated with a particular university, the publication has a weight of 2 / 5 = 0.4 in the calculation of the scientific impact indicators for this university. The fractional counting method leads to a more accurate field normalization of scientific impact indicators and therefore to fairer comparisons between universities active in different fields. For this reason, fractional counting is the preferred counting method for the scientific impact indicators in the Leiden Ranking Open Edition.
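
In code, the two counting methods reduce to a choice of publication weight; a minimal sketch using the example above:

    # Sketch: weight of one publication for a university under the two
    # counting methods.
    def full_weight(n_affiliated_authors, n_authors):
        return 1.0 if n_affiliated_authors > 0 else 0.0

    def fractional_weight(n_affiliated_authors, n_authors):
        return n_affiliated_authors / n_authors

    # The example from the text: 2 of 5 authors affiliated with the university.
    print(full_weight(2, 5))        # 1.0
    print(fractional_weight(2, 5))  # 0.4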

Collaboration and open access indicators are always calculated using the full counting method.

Trend analysis

To facilitate trend analyses, the Leiden Ranking Open Edition provides statistics not only based on publications from the period 2020–2023, but also based on publications from earlier periods: 2006–2009, 2007–2010, ..., 2019–2022. The statistics for the different periods are calculated in a fully consistent way. For each period, citations are counted until the end of the first year after the period has ended. For instance, in the case of the period 2006–2009 citations are counted until the end of 2010, while in the case of the period 2020–2023 citations are counted until the end of 2024.
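
Stated compactly: for a four-year period ending in year y, citations are counted until the end of year y + 1. A small sketch enumerating the periods and their citation cutoffs:

    # Sketch: the rolling four-year publication periods and the year until
    # the end of which citations are counted for each period.
    for start in range(2006, 2021):
        end = start + 3
        cutoff = end + 1
        print(f"{start}-{end}: citations counted until end of {cutoff}")
    # 2006-2009: citations counted until end of 2010
    # ...
    # 2020-2023: citations counted until end of 2024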

Stability intervals

Stability intervals provide some insight into the uncertainty in bibliometric statistics. A stability interval indicates a range of values of an indicator that are likely to be observed when the underlying set of publications changes. For instance, the PP(top 10%) indicator may be equal to 15.3% for a particular university, with a stability interval ranging from 14.1% to 16.5%. This means that the PP(top 10%) indicator equals 15.3% for this university, but that changes in the set of publications of the university may relatively easily lead to PP(top 10%) values in the range from 14.1% to 16.5%. The Leiden Ranking Open Edition employs 95% stability intervals constructed using a statistical technique known as bootstrapping.
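
As an illustration of the general technique (not the exact procedure used in the ranking), a 95% bootstrap interval can be obtained by resampling a university’s publications with replacement, recomputing the indicator for each resample, and taking the 2.5th and 97.5th percentiles of the resulting distribution; the data below are hypothetical.

    import random

    def bootstrap_interval(pubs, indicator, n_resamples=1000, seed=42):
        # Generic 95% bootstrap interval: resample the publication set with
        # replacement and recompute the indicator for each resample.
        rng = random.Random(seed)
        values = sorted(indicator(rng.choices(pubs, k=len(pubs)))
                        for _ in range(n_resamples))
        return values[int(0.025 * n_resamples)], values[int(0.975 * n_resamples)]

    # Hypothetical example: stability interval for PP(top 10%), with a flag
    # marking whether a publication is in the top 10% of its field and year.
    pp_top10 = lambda pubs: sum(pub["is_top10"] for pub in pubs) / len(pubs)
    pubs = [{"is_top10": i < 75} for i in range(500)]  # 15% highly cited
    print(bootstrap_interval(pubs, pp_top10))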

More information

More information on the indicators available in the Leiden Ranking can be found in a number of articles published by CWTS researchers. Field normalization of scientific impact indicators based on algorithmically defined fields is studied by Ruiz-Castillo and Waltman (2015). The calculation of percentile-based indicators of scientific impact is discussed by Waltman and Schreiber (2013). Different approaches to normalizing citation impact indicators are compared by Waltman and Van Eck (2013a, 2013b). The methodology used in the Leiden Ranking Open Edition for identifying core publications and core journals in the OpenAlex database is described by Van Eck and Waltman (2025). Finally, the importance of using fractional rather than full counting in the calculation of field-normalized scientific impact indicators is explained by Waltman and Van Eck (2015).

  • Ruiz-Castillo, J., & Waltman, L. (2015). Field-normalized citation impact indicators using algorithmically constructed classification systems of science. Journal of Informetrics, 9(1), 102–117.
  • Van Eck, N.J., & Waltman, L. (2025). A methodology for identifying core sources and core publications in OpenAlex. Zenodo.
  • Waltman, L., & Schreiber, M. (2013). On the calculation of percentile-based bibliometric indicators. Journal of the American Society for Information Science and Technology, 64(2), 372–379.
  • Waltman, L., & Van Eck, N.J. (2013a). Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison. Scientometrics, 96(3), 699–716.
  • Waltman, L., & Van Eck, N.J. (2013b). A systematic empirical comparison of different approaches for normalizing citation impact indicators. Journal of Informetrics, 7(4), 833–849.
  • Waltman, L., & Van Eck, N.J. (2015). Field-normalized citation impact indicators and the choice of an appropriate counting method. Journal of Informetrics, 9(4), 872–894.