SIR Methodology

General considerations

  • The SIR (SCImago Institutions Rankings) is a characterization of institutions grounded in three sets of indicators covering research, innovation and web visibility.
  • The values of each indicator have been normalized on a scale of 0 to 100, so that the published values serve no role other than determining the position of each institution relative to the others, in order to facilitate benchmarking.
  • Institutions have been selected using a single criterion: they must be research institutions with more than 100 works published and indexed in the Scopus database during the last year of the analyzed period.
  • In order to achieve the highest level of precision in the institutional results for the different indicators, an exhaustive manual disambiguation of institution names has been carried out.
  • The ranking of institutions is generated each year from the results of the five-year period ending in that year. For instance, for the year 2012 the results used are those of the five-year period 2008-2012. The only exception is the web indicators, which are calculated for the last year only.
  • Institutions have been segmented by the countries to which they belong, including multinational institutions (MUL) which cannot be attributed to any country.
  • Some institutions, marked with an asterisk, group sub-institutions; the sub-institutions are in turn marked with the abbreviated name of their ‘parent’ institution. The results of a ‘parent’ institution always include those of its ‘children’.
  • Institutions are grouped by institutional sectors in order to be able to generate lists with a higher degree of institutional homogeneity.
  • The source of information used to generate the innovation indicators has been PATSTAT.
  • The sources of information used to generate the web visibility indicators have been Google and ahrefs.
  • The developed interface allows lists to be visualized sorted by the rank in each indicator separately, as well as the evolution of the ranks of one or several institutions (up to five).
  • The curves and bar graphs always represent ranks, that is, positions in the general lists. Therefore, lower values indicate better positions in the list and higher values worse ones.

The SIR IS NOT A LEAGUE TABLE. The only goal of the SIR is to characterize the research outcomes of organizations so as to provide useful scientometric ranks to institutions, policymakers and research managers, enabling them to analyze, evaluate and improve their research results. Anyone who uses this information to rank institutions or to build a league table for any purpose does so under his or her own responsibility.

Indicators

Indicators are divided into three groups intended to reflect scientific, economic and social characteristics of institutions. Keep in mind that once the indicators are calculated, the resulting values for each indicator are normalized on a scale of 0 to 100. The SIR includes both size-dependent and size-independent indicators, that is, indicators that are and are not influenced by the size of the institutions. In this manner, the SIR provides overall statistics of institutions' scientific publication output while at the same time enabling comparisons between institutions of different sizes.
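The exact normalization procedure is not published here, so the following is only a minimal sketch under the assumption of a simple min-max rescaling, which is consistent with the description above (values on a 0-100 scale that convey relative position only). The institution names and raw scores are hypothetical:

```python
def normalize_0_100(values):
    """Rescale raw indicator values to the 0-100 range (min-max).

    Only the relative position of each institution is preserved,
    which is all the published values are intended to convey.
    """
    lo, hi = min(values), max(values)
    if hi == lo:  # all institutions tied: no relative information
        return [0.0 for _ in values]
    return [100.0 * (v - lo) / (hi - lo) for v in values]

# Hypothetical raw output counts for three institutions.
raw = {"Inst A": 5200, "Inst B": 830, "Inst C": 12400}
normalized = dict(zip(raw, normalize_0_100(list(raw.values()))))
# The top institution maps to 100 and the bottom one to 0.
```

Under this assumption, comparing two normalized values across different indicators says nothing about absolute magnitudes, only about each institution's standing within the list.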

Research

  1. Output: Total number of documents published in scholarly journals indexed in Scopus (Romo-Fernández, et al., 2011). This is a size-dependent indicator.
  2. International Collaboration: Institution's output ratio produced in collaboration with foreign institutions. The values are computed by analyzing an institution's output whose affiliations include more than one country address (Guerrero-Bote, Olmeda-Gómez and Moya-Anegón, 2013; Lancho-Barrantes, Guerrero-Bote and Moya-Anegón, 2013; Lancho-Barrantes, et al., 2013; Chinchilla-Rodríguez, et al., 2012). This is a size-independent indicator.
  3. Normalized Impact: Normalized Impact of led output is computed using the methodology established by the Karolinska Institutet in Sweden, where it is named the "Item oriented field normalized citation score average". The normalization of citation values is done at the individual article level. The values (in decimal numbers) show the relationship between an institution's average scientific impact and the world average, which is set to a score of 1; i.e., an NI score of 0.8 means the institution is cited 20% below the world average, and 1.3 means it is cited 30% above the world average (Rehn and Kronman, 2008; González-Pereira, Guerrero-Bote and Moya-Anegón, 2011). This is a size-independent indicator.
  4. High Quality Publications: Ratio of an institution's publications that appear in the most influential scholarly journals of the world, i.e. those ranked in the first quartile (25%) of their categories as ordered by the SCImago Journal Rank (SJR) indicator (Miguel, Chinchilla-Rodríguez and Moya-Anegón, 2011). This is a size-independent indicator.
  5. Specialization Index: The Specialization Index indicates the extent of thematic concentration or dispersion of an institution's scientific output. Values range between 0 and 1, indicating generalist and specialized institutions respectively. The indicator is computed following the Gini index used in economics (Moed et al., 2011; López-Illescas, Moya-Anegón and Moed, 2011; Arencibia-Jorge et al., 2012). A value of 0 also indicates that the data are insufficient to calculate the indicator. Note that although the resulting specialization values range between 0 and 1, they have been normalized on a scale of 0 to 100, like the rest of the indicators. This indicator is size-independent.
  6. Excellence Rate: The Excellence Rate indicates the percentage of an institution's scientific output that is included in the set formed by the 10% most cited papers in their respective scientific fields. It is a measure of the high-quality output of research institutions (SCImago Lab, 2011; Bornmann, Moya-Anegón and Leydesdorff, 2012; Guerrero-Bote and Moya-Anegón, 2012). This is a size-independent indicator.
  7. Scientific Leadership: Leadership indicates the percentage of an institution's output in which it is the main contributor, that is, the papers whose corresponding author belongs to the institution (Moya-Anegón, 2012; Moya-Anegón et al., 2013). This is a size-independent indicator.
  8. Excellence with Leadership: Excellence with Leadership indicates the share of documents in the Excellence Rate in which the institution is the main contributor (Moya-Anegón, et al., 2013). This is a size-independent indicator.
  9. Scientific talent pool: Total number of distinct authors from an institution contributing to its total publication output during a particular period of time. This indicator is size-dependent.
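Two of the indicators above lend themselves to a compact illustration. The sketch below shows (a) an item-oriented field-normalized citation score average in the spirit of indicator 3, and (b) a Gini-based concentration measure in the spirit of indicator 5. The data structures and the per-field world-average baselines are hypothetical; the actual SIR computation relies on Scopus category averages not reproduced here.

```python
def normalized_impact(papers):
    """Item-oriented field-normalized citation score average (cf. indicator 3).

    Each paper's citation count is divided by the world average citations
    of papers of the same field, year and document type ('world_avg' below,
    a hypothetical precomputed baseline), and the ratios are averaged.
    A result of 1.0 means citation impact equal to the world average.
    """
    ratios = [p["citations"] / p["world_avg"] for p in papers if p["world_avg"] > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0


def specialization_index(category_output):
    """Gini-style concentration of output across subject categories (cf. indicator 5).

    0 = output spread evenly over categories (generalist);
    values approaching 1 = output concentrated in few categories (specialized).
    """
    xs = sorted(category_output)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0  # insufficient data, following the SIR convention
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n


# One paper cited above and one below its field's world average.
ni = normalized_impact([{"citations": 10, "world_avg": 8},
                        {"citations": 4, "world_avg": 8}])
# Output spread evenly over four categories gives a specialization index of 0.
si = specialization_index([25, 25, 25, 25])
```

Note that, as stated above, the published SIR values for these indicators are further rescaled to 0-100, so the raw results of such a computation are not directly comparable to the published tables.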

Innovation

  1. Innovative Knowledge: Scientific publication output of an institution cited in patents. Based on PATSTAT (http://www.epo.org). This indicator is size-dependent.
  2. Technological Impact: Percentage of an institution's scientific publication output that is cited in patents, relative to its total output. Based on PATSTAT (http://www.epo.org). This indicator is size-independent.
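As a minimal illustration of the Technological Impact calculation described above (the function name and the example counts are hypothetical):

```python
def technological_impact(cited_in_patents, total_output):
    """Percentage of an institution's publications cited in patents
    (Technological Impact, indicator 2 above).

    Returns 0.0 when the institution has no output.
    """
    if total_output == 0:
        return 0.0
    return 100.0 * cited_in_patents / total_output

# E.g. 120 patent-cited papers out of 4,800 publications -> 2.5%.
impact = technological_impact(120, 4800)
```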

Web

  1. Web size: Number of pages associated with the institution's URL according to Google (https://www.google.com). This indicator is size-dependent.
  2. Domain’s inbound links: Number of incoming links to an institution's domain according to ahrefs (https://ahrefs.com). This indicator is size-dependent.

Bibliography

  • Arencibia-Jorge, R., Vega-Almeida, R. L., Chinchilla-Rodríguez, Z., Corera-Álvarez, E., Moya-Anegón, F. (2012) Patrones de especialización de la investigación nacional sobre Salud. Revista Cubana de Salud Pública 38 (5). http://dx.doi.org/10.1590/S0864-34662012000500007
  • Bornmann, L., De Moya Anegón, F., Leydesdorff, L. (2012) The new Excellence Indicator in the World Report of the SCImago Institutions Rankings 2011. Journal of Informetrics, 6 (2), pp. 333-335. http://dx.doi.org/10.1016/j.joi.2011.11.006
  • Chinchilla-Rodríguez, Z., Benavent-Pérez, M., Miguel, S., Moya-Anegón, F. (2012) “International Collaboration in Medical Research in Latin America and the Caribbean (2003-2007)”. Journal of the American Society for Information Science and Technology 63 (11), pp. 2223-2238. http://dx.doi.org/10.1002/asi.22669
  • González-Pereira, B., Guerrero-Bote,V., Moya-Anegón, F. (2010). A new approach to the metric of journal’s scientific prestige: The SJR indicator. Journal of Informetrics, 4(3), pp. 379–391. http://dx.doi.org/10.1016/j.joi.2010.03.002
  • Guerrero-Bote, V.P., Moya-Anegón, F. (2012) A further step forward in measuring journals' scientific prestige: The SJR2 indicator. Journal of Informetrics, 6 (4), pp. 674-688. http://dx.doi.org/10.1016/j.joi.2012.07.001
  • Guerrero Bote, V.P., Olmeda-Gomez, C., De Moya-Anegon, F. (2013) Quantifying the benefits of international scientific collaboration. Journal of the American Society for Information Science and Technology, 64 (2), pp. 392-404. http://dx.doi.org/10.1002/asi.22754
  • Lancho-Barrantes, B.S., Guerrero-Bote, V.P., de Moya-Anegón, F. (2013) Citation increments between collaborating countries. Scientometrics, 94 (3), pp. 817-831.
  • Lancho-Barrantes, B. S., Guerrero-Bote, V. P., Chinchilla-Rodríguez, Z., Moya-Anegón, F. (2012) Citation Flows in the Zones of Influence of Scientific Collaborations. Journal of the American Society for Information Science and Technology 63 (3), pp. 481-489. http://dx.doi.org/10.1002/asi.21682
  • Lopez-Illescas, C., de Moya-Anegón, F., Moed, H.F. (2011) A ranking of universities should account for differences in their disciplinary specialization. Scientometrics, 88 (2), pp. 563-574. http://dx.doi.org/10.1007/s11192-011-0398-6
  • Miguel, S., Chinchilla-Rodríguez, Z., Moya-Anegón, F. (2011) Open Access and Scopus: A New Approach to Scientific Visibility From the Standpoint of Access. Journal of the American Society for Information Science and Technology, 62 (6), pp. 1130-1145. http://dx.doi.org/10.1002/asi.21532
  • Moya-Anegón, F., Chinchilla-Rodríguez, Z., Vargas-Quesada, B., Corera-Álvarez, E., González-Molina, A., Muñoz-Fernández, F. J., Herrero-Solana, V. (2007) Coverage analysis of SCOPUS: a journal metric approach. Scientometrics 73 (1), pp. 57-58. http://dx.doi.org/10.1007/s11192-007-1681-4
  • Moed, H.F., Moya-Anegón, F., López-Illescas, C., Visser, M. (2011). Is concentration of university research associated with better research performance? Journal of Informetrics. 5 (4) 649-658. http://dx.doi.org/10.1016/j.joi.2011.06.003
  • Moya-Anegón, F. (2012) Liderazgo y excelencia de la ciencia española. Profesional de la Información, 21 (2), pp. 125-128. http://dx.doi.org/10.3145/epi.2012.mar.01