
Journal Metrics: Bibliography

This guide describes what the journal impact factor (JIF or IF) is, criticisms of the IF, how to use the IF responsibly, and alternative journal metrics being developed by other organizations.

Bibliography

Alexandrov, G. A. (2011). "The meaning of the 'impact factor' in the case of an open-access journal." Carbon Balance Manag 6(1): 1.

            The dominant model of journal evaluation emerged at a time when there were no open-access journals, and no one has yet assessed whether that model can cope with this modern reality. This commentary attempts to fill the gaps in the common understanding of the role the 'impact factor' should play in the evaluation of open-access journals.

 

Arfan ul, B. (2008). "Journal impact factor: still an enigma." J Coll Physicians Surg Pak 18(7): 458-459.

           

Benner, R. S. (2012). "Evaluating the importance of a journal: the impact factor and other metrics." Obstet Gynecol 119(1): 3-4.

           

Bornmann, L., W. Marx, et al. (2011). "Diversity, value and limitations of the journal impact factor and alternative metrics." Rheumatol Int.

            The highly popular journal impact factor (JIF) is the average number of citations received in a given year by the articles a journal published in the two preceding years. It is widely used as a proxy for a journal's quality and scientific prestige. This article discusses misuses of the JIF to assess the impact of individual journal articles and the effect of several manuscript versions on the JIF. It also presents some newer alternative journal metrics, such as the SCImago Journal Rank and the h-index, and analyses examples of their application in several subject categories.
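
As a concrete reading of the two-year definition above, the following sketch computes a JIF from invented citation and item counts; none of the numbers refer to a real journal.

```python
# Two-year journal impact factor (JIF), following the standard definition:
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.
# All counts below are invented for illustration.

citations_in_2011_to_2009_items = 210
citations_in_2011_to_2010_items = 150
citable_items_2009 = 80
citable_items_2010 = 70

jif_2011 = (citations_in_2011_to_2009_items + citations_in_2011_to_2010_items) / (
    citable_items_2009 + citable_items_2010
)
print(f"2011 JIF: {jif_2011:.3f}")  # (210 + 150) / (80 + 70) = 2.400
```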

 

Chattopadhyay, A. (2009). "How useful is journal impact factor?" Indian J Dent Res 20(2): 246-248.

           

Dempsey, J. A. (2009). "Impact factor and its role in academic promotion: a statement adopted by the International Respiratory Journal Editors Roundtable." J Appl Physiol 107(4): 1005.

           

Diamandis, E. P. (2009). "Journal Impact Factor: it will go away soon." Clin Chem Lab Med 47(11): 1317-1318.

           

Favaloro, E. J. (2009). "The Journal Impact Factor: don't expect its demise any time soon." Clin Chem Lab Med 47(11): 1319-1324.

            Much emphasis continues to be placed on the Journal Impact Factor (IF), a measure of journal article citation rates that is typically used as a surrogate marker of the quality of both the article and the journal. The IF is both revered and reviled; it is neither a perfect nor a comprehensive measure, having several limitations and being subject to easy manipulation. The IF holds 'power' for journals because it can influence their future success. Furthermore, the perceived utility of the IF has grown well beyond its original and still popular use as a surrogate marker of publication 'quality'. The IF is increasingly being used (i) to objectively evaluate the scientific and academic value of scientists across a wide variety of disciplines, (ii) to short-list research projects for future financial support, (iii) to short-list or select applicants for academic promotion, and (iv) by researchers to measure the success of research institutes, research funding, or even entire countries. Accordingly, despite our love-hate relationship with the IF, don't expect its demise any time soon.

 

Grzybowski, A. (2009). "The journal impact factor: how to interpret its true value and importance." Med Sci Monit 15(2): SR1-4.

            In 1955, Garfield suggested that the number of references could be used to measure the "impact" of a journal, but the term "impact factor" was introduced in 1963 by Garfield and Sher. The primary goal of impact factor analysis was to improve the management of library journal collections. Single-parameter measurements of the quality of a journal article have become increasingly popular as a substitute for scientific quality. The simplicity of its counting system and the convenience of its use are significant benefits. Probably for these reasons, funding bodies, academic authorities, and some governments began using the impact factor to guide decisions about allocating grants, awarding appointments and academic degrees, and defining scientific policy. The journal impact factor, often recognized as a symbol of scientific prestige and relevance, can, however, be greatly influenced by the type of medical article (review vs. original work), clinical specialty, and research field. The true value and implications of the journal impact factor (JIF) are important to understand. It is critical to remember that the JIF can be used only to evaluate journals. All comparisons should include only journals, never individuals or departments. Only similar journals (particularly those dedicated to the same scientific specialty) should be compared, because the value of the impact factor varies greatly by discipline.

 

Habibzadeh, F. (2008). "Journal impact factor: uses and misuses." Arch Iran Med 11(4): 453-454.

           

Kuroki, L. M., J. E. Allsworth, et al. (2009). "Methodology and analytic techniques used in clinical research: associations with journal impact factor." Obstet Gynecol 114(4): 877-884.

            OBJECTIVE: To describe research methodology and statistical reporting of published articles in high-impact-factor general medical journals compared with moderate-impact-factor obstetrics and gynecology journals. METHODS: A cross-sectional analysis was performed on 371 articles published from January to June 2006 in six journals (high-impact-factor group: Journal of the American Medical Association, The Lancet, the New England Journal of Medicine; moderate-impact-factor group: American Journal of Obstetrics & Gynecology, British Journal of Obstetrics and Gynaecology, and Obstetrics & Gynecology). Articles were classified by level of evidence. Data abstracted from each article included number of authors, clearly stated hypothesis, sample size/power calculations, statistical measures, and use of regression analysis. Univariable analyses were performed to evaluate differences between the high-impact-factor and moderate-impact-factor groups. RESULTS: The majority of published reports were observational studies (50%), followed by randomized controlled trials ([RCTs] 24%), case reports (14%), systematic reviews (6%), case series (1%), and other study types (4%). Within the high-impact-factor group, 35% were RCTs compared with 12% in the moderate-impact-factor group (relative risk 2.9, 95% confidence interval 1.9-4.4). Recommended statistical reporting (e.g., point estimates with measures of precision) was more common in the high-impact-factor group (P<.005). CONCLUSION: The proportion of RCTs published in the high-impact-factor group was three times that of the moderate group. Efforts to provide the highest level of evidence and statistical reporting have the potential to improve the quality of reports in the medical literature available for clinical decision making. LEVEL OF EVIDENCE: III.
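
As a side note on the arithmetic, the relative risk quoted above is simply the ratio of the two RCT proportions. A minimal check using the percentages from the abstract follows; the confidence interval cannot be recomputed here because it requires the underlying group counts, which the abstract does not report.

```python
# Relative risk of an article being an RCT, high- vs. moderate-impact-factor
# journals, using the proportions reported in the abstract.
p_rct_high_if = 0.35
p_rct_moderate_if = 0.12

relative_risk = p_rct_high_if / p_rct_moderate_if
print(f"RR = {relative_risk:.1f}")  # 0.35 / 0.12 = 2.9, matching the abstract
```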

 

Lokker, C., R. B. Haynes, et al. (2012). "How well are journal and clinical article characteristics associated with the journal impact factor? a retrospective cohort study." J Med Libr Assoc 100(1): 28-33.

            OBJECTIVE: Journal impact factor (JIF) is often used as a measure of journal quality. A retrospective cohort study determined the ability of clinical article and journal characteristics, including appraisal measures collected at the time of publication, to predict subsequent JIFs. METHODS: Clinical research articles that passed methods quality criteria were included. Each article was rated for relevance and newsworthiness by 3 to 24 physicians from a panel of more than 4,000 practicing clinicians. The 1,267 articles (from 103 journals) were divided 60:40 into derivation (760 articles) and validation sets (507 articles), representing 99 and 88 journals, respectively. A multiple regression model was produced determining the association of 10 journal and article measures with the 2007 JIF. RESULTS: Four of the 10 measures were significant in the regression model: number of authors, number of databases indexing the journal, proportion of articles passing methods criteria, and mean clinical newsworthiness scores. With the number of disciplines rating the article, the 5 variables accounted for 61% of the variation in JIF (R² = 0.607, 95% CI 0.444 to 0.706, P<0.001). CONCLUSION: For the clinical literature, measures of scientific quality and clinical newsworthiness available at the time of publication can predict JIFs with 60% accuracy.
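
The model described here is an ordinary multiple linear regression. The sketch below shows the mechanics with NumPy on invented data: the predictor names echo the measures mentioned in the abstract, but the data, coefficients, and fit are fabricated for illustration and are not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical articles

# Invented predictors echoing the kinds of measures named in the abstract.
n_authors = rng.integers(1, 15, n).astype(float)
n_indexing_databases = rng.integers(1, 30, n).astype(float)
newsworthiness = rng.uniform(1.0, 7.0, n)

# Invented outcome: a "JIF" with a linear signal plus noise.
jif = (0.10 * n_authors + 0.05 * n_indexing_databases
       + 0.40 * newsworthiness + rng.normal(0.0, 0.5, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), n_authors, n_indexing_databases, newsworthiness])
coef, _, _, _ = np.linalg.lstsq(X, jif, rcond=None)

residuals = jif - X @ coef
r_squared = 1.0 - residuals.var() / jif.var()
print("coefficients (intercept first):", np.round(coef, 3))
print(f"R^2 = {r_squared:.3f}")
```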

 

Mathur, V. P. and A. Sharma (2009). "Impact factor and other standardized measures of journal citation: a perspective." Indian J Dent Res 20(1): 81-85.

            The impact factor of journals has been widely used as glory quotients. Despite its limitations, this citation metric is widely used to reflect scientific merit and standing in one's field. Apart from the impact factor, other bibliometric indicators are also available but are not as popular among decision makers. These indicators are the immediacy index and cited half-life. The impact factor itself is affected by a wide range of sociological and statistical factors. This paper discusses the limitations of the impact factor with suggestions of how it can be used and how it should not be used. It also discusses how other bibliometric indicators can be used to assess the quality of publications.

 

McVeigh, M. E. and S. J. Mann (2009). "The journal impact factor denominator: defining citable (counted) items." JAMA 302(10): 1107-1109.

           

Nahata, M. C. (2009). "Journal impact factor: what it is and is not." Ann Pharmacother 43(1): 112-113.

           

Oh, H. C. and J. F. Lim (2009). "Is the journal impact factor a valid indicator of scientific value?" Singapore Med J 50(8): 749-751.

           

Pagel, P. S. and J. A. Hudetz (2011). "Bibliometric analysis of anaesthesia journal editorial board members: correlation between journal impact factor and the median h-index of its board members." Br J Anaesth 107(3): 357-361.

            BACKGROUND: The h-index is useful for quantifying scholarly activity in medicine, but this statistic has not been extensively applied as a measure of productivity in anaesthesia. We conducted a bibliometric analysis of the h-index in editorial board members and tested the hypothesis that editorial board members of anaesthesia journals with higher impact factors (IFs) have higher h-indices. METHODS: Ten of 19 journals with a 2009 IF > 1 were randomly chosen from Journal Citation Reports®. Board members were identified using each journal's website. Publications, citations, citations per publication, and the h-index for each member were obtained using Scopus®. RESULTS: Four hundred and twenty-three individuals filled 481 anaesthesia editorial board positions. The median h-index of all editorial board members was 14. Board members published 75 papers (median) with 1006 citations and 13 citations per publication. Members serving on journals with IF greater than the median had significantly (P<0.05; Wilcoxon's rank-sum test) greater median h-index, citations, and citations per publication than those at journals with IF less than the median. A significant correlation between the median h-index of a journal's editorial board members and its IF (h-index = 3.01 × IF + 6.85; r² = 0.452; P = 0.033) was observed for the 10 journals examined. Board members of subspeciality-specific journals had bibliometric indices that were lower than those at general journals. The h-index was greater in individuals serving more than one journal. European editorial board members had higher h-index values than their American colleagues. CONCLUSIONS: The results suggest that editorial board members of anaesthesia journals with higher IFs have higher h-indices.
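
Since the reported relationship is a simple fitted line, the board-median h-index implied by a given impact factor can be read directly from it; a one-function sketch (bearing in mind that r² = 0.452 means the line explains less than half of the variation):

```python
def predicted_median_h_index(impact_factor: float) -> float:
    """Board-median h-index implied by the fitted line reported in the
    abstract: h-index = 3.01 * IF + 6.85 (r^2 = 0.452 across 10 journals)."""
    return 3.01 * impact_factor + 6.85

# For example, an anaesthesia journal with IF = 4.0:
print(predicted_median_h_index(4.0))  # 3.01 * 4.0 + 6.85 = 18.89
```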

 

Plebani, M. (2009). "The journal impact factor: navigating between Scylla and Charybdis." Clin Chem Lab Med 47(11): 1315-1316.

           

Racki, G. (2009). "Rank-normalized journal impact factor as a predictive tool." Arch Immunol Ther Exp (Warsz) 57(1): 39-43.

            Citation data accumulated on articles from the top and bottom 25% of impact factor (IF)-ranked international journals are compared, using 59 international geoscience journals from 1998 and 378 Polish geological papers from 1989-1994. There is a minor risk of being uncited when results are published in high-IF periodicals, as the average non-citation rate is 0.88% over a 10-year period in this not very rapidly developing scientific discipline. Similarly, the established error level in the prognosis of expected citation success versus failure, based on the extreme IF quartiles as an evaluation tool, is low (at most 12.5%). Thus the application of the rank-normalized journal IF as a proxy of real citation frequency, and accordingly as a predictive tool in the a priori qualification of recently published work, is a rational time- and cost-saving alternative (or at least a significant supplement) to traditional informed peer review. Blanket criticism of using the IF for decisions in research funding is therefore at least partly exaggerated.
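
For readers unfamiliar with the idea, a rank-normalized journal IF locates a journal by its position in the IF-ranked list for its own subject category rather than by the raw IF value. A minimal sketch, with invented category IFs:

```python
def rank_normalized_if(journal_if: float, category_ifs: list[float]) -> float:
    """Fraction of journals in the subject category whose IF is at or below
    this journal's IF (1.0 = top of the ranking). Values here are invented."""
    at_or_below = sum(1 for x in category_ifs if x <= journal_if)
    return at_or_below / len(category_ifs)

category = [0.4, 0.9, 1.1, 1.6, 2.2, 2.8, 3.5, 4.1]  # hypothetical category IFs
print(rank_normalized_if(2.2, category))  # 5/8 = 0.625: top half, not top quartile
```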

 

Ruiz, M. A., O. T. Greco, et al. (2009). "Journal impact factor: its editorial, academic and scientific influence." Rev Bras Cir Cardiovasc 24(3): 273-278.

            In this report the authors present information on bibliometric instruments and their importance in measuring the quality of scientific journals and researchers. In particular, they review the history and deployment of the impact factor, maintained by the Institute for Scientific Information since 1955. Criticisms regarding the inadequacy of the impact factor for evaluating scientific production, its misuse, and editorial strategies for manipulating the bibliometric index are presented and discussed. The new CAPES classification for journals, based on various criteria including the impact factor, and its influence on national scientific and academic life are also presented. The authors conclude that, despite all obstacles and discussion, the impact factor of the Institute for Scientific Information is still a useful tool and the only one available, by itself, to assess scientific and intellectual productivity.

 

Triaridis, S. and A. Kyrgidis (2010). "Peer review and journal impact factor: the two pillars of contemporary medical publishing." Hippokratia 14(Suppl 1): 5-12.

            The appraisal of scientific quality is a particularly difficult problem. Editorial boards resort to secondary criteria including crude publication counts, journal prestige, the reputation of authors and institutions, and the estimated importance and relevance of the research field, making peer review a controversial rather than a rigorous process. Against this background, different methods for evaluating research may become required, including citation rates and journal impact factors (IFs), which are thought to be more quantitative and objective indicators, directly related to published science. The aim of this review is to examine the two pillars of contemporary medical publishing, namely the peer review process and the IF. Review of publications by qualified experts appears to be the only way to evaluate the quality of medical publications. Improving and standardising the principles, procedures and criteria used in peer review evaluation is of great importance. Standardizing and improving training techniques for peer reviewers would allow a journal's impact factor to grow. This may be a very important reason why the impact factor and peer review need to be analyzed simultaneously: improving a journal's IF would be difficult without improving peer-review efficiency. Peer reviewers need to understand the fundamental principles of contemporary medical publishing, that is, peer review and impact factors. The current supplement of Hippokratia, supporting its seminar for reviewers, will help to fulfil some of these aims.

 

Villanueva Lopez, I. S. (2008). "[Impact factor of a biomedical journal. Is it justified to look for it?]." Acta Ortop Mex 22(3): 143-144.

           

Wu, X. F., Q. Fu, et al. (2008). "On indexing in the Web of Science and predicting journal impact factor." J Zhejiang Univ Sci B 9(7): 582-590.

            We discuss what document types account for the calculation of the journal impact factor (JIF) as published in the Journal Citation Reports (JCR). Based on a brief review of articles discussing how to predict JIFs and taking data differences between the Web of Science (WoS) and the JCR into account, we make our own predictions. Using data by cited-reference searching for Thomson Scientific's WoS, we predict 2007 impact factors (IFs) for several journals, such as Nature, Science, Learned Publishing and some Library and Information Sciences journals. Based on our colleagues' experiences we expect our predictions to be lower bounds for the official journal impact factors. We explain why it is useful to derive one's own journal impact factor.

 

 

Additional Readings

Aguillo, I. F. (2012). Is Google Scholar useful for bibliometrics? A webometric analysis. Scientometrics, In press.

Aguillo, I. F., Ortega, J. L., & Fernández, M. (2008). Webometric Ranking of World Universities: Introduction, Methodology, and Future Developments. Higher Education in Europe, 33 (2-3), 233-244. doi:10.1080/03797720802254031

Armbruster, C. (2008). Access, Usage and Citation Metrics: What Function for Digital Libraries and Repositories in Research Evaluation?  Social Science Research Network Working Paper Series. Retrieved from http://ssrn.com/abstract=1088453

 

Bar-Ilan, J., & Peritz, B. C. (2002). Informetric Theories and Methods for Exploring the Internet: An Analytical Survey of Recent Research Literature. Library Trends, 50 (3), 371-392.

 

Beel, J., & Gipp, B. (2010). Academic search engine spam and Google Scholar's resilience against it. Journal of Electronic Publishing, 13(3). Retrieved from http://quod.lib.umich.edu/j/jep/3336451.0013.305?rgn=main;view=fulltext

Beniger, J. R. (1986). The Control Revolution: Technological and Economic Origins of the Information Society. Cambridge, Massachusetts, and London, England: Harvard University Press.

 

Björneborn, L., & Ingwersen, P. (2001). Perspectives of webometrics. Scientometrics, 50 (1), 65-82.

 

Borgman, C. L. (2007). Scholarship in the Digital Age: Information, Infrastructure, and the Internet. The MIT Press. Retrieved from http://www.amazon.com/dp/0262026198

 

Bulger, M., Meyer, E. T., Flor, G. de la, Terras, M., Wyatt, S., Jirotka, M., Eccles, K., et al. (2011). Reinventing research? Information practices in the humanities. Research Information Network (pp. 1-83).

Butler, D. (2011). Experts question rankings of journals. Nature, 478(7367), 20. doi:10.1038/478020a

Costas, R., & van Leeuwen, T. N. (2011). Unraveling the complexity of thanking: preliminary analyses on the "Funding Acknowledgment" field of the Web of Science database. 16th Nordic Workshop on Bibliometrics and Research Policy. Aalborg: Royal School of Library and Information Science. Retrieved from http://itlab.dbit.dk/~nbw2011/index.php?s=programme

 

Costas, R., van Leeuwen, T. N., & Bordons, M. (2010). A bibliometric classificatory approach for the study and assessment of research performance at the individual level: The effects of age on productivity and impact. Journal of the American Society for Information Science and Technology, 61(8), 1564-1581. doi:10.1002/asi.v61:8

 

Cronin, B., & Weaver, S. (1995). The praxis of acknowledgement: from bibliometrics to influmetrics. Revista Española de Documentación Científica, 18(2), 172-177. Retrieved from http://redc.revistas.csic.es/index.php/redc/article/view/654/729

 

Davis, P. M. (2012). Tweets, and Our Obsession with Alt Metrics. The Scholarly Kitchen. Retrieved January 8, 2012, from http://scholarlykitchen.sspnet.org/2012/01/04/tweets-and-our-obsession-with-alt-metrics/

 

Dutton, W. H., Jeffreys, P. W., & Goldin, I. (2010). World wide research: reshaping the sciences and humanities. (William H. Dutton & P. W. Jeffreys, Eds.) (New ed., p. 408). The MIT Press. Retrieved from http://www.amazon.com/dp/0262513730

Eysenbach, G. (2011). Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact. Journal of Medical Internet Research, 13(4). doi:10.2196/jmir.2012

Grimm, J., & Grimm, W. (2004). The Annotated Brothers Grimm. (M. Tatar, Ed.) (p. 416). W. W. Norton & Company. Retrieved from http://www.amazon.com/Annotated-Brothers-Grimm-Books/dp/0393058484/ref=pd_sim_b_1

Groth, P., & Gurney, T. (2010). Studying Scientific Discourse on the Web using Bibliometrics: A Chemistry Blogging Case Study. Retrieved from http://de.slideshare.net/pgroth/studying-scientific-discourse-on-the-web-using-bibliometrics

Groth, P., Gibson, A., & Velterop, J. (2010). The anatomy of a nanopublication. Information Services & Use, 30, 51-56. doi:10.3233/ISU-2010-0613

Gruzd, A., Goertzen, M., & Mai, P. (2011). Survey Research Highlights: Trends in Scholarly Communication and Knowledge Dissemination in the Age of Online Social Media. Halifax, Canada.

Harley, D., Acord, S., Earl-Novell, S., Lawrence, S., & King, C. J. (2010). Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines. Retrieved from http://escholarship.org/uc/cshe_fsc  

Hewson, C. (2003). Internet research methods: a practical guide for the social and behavioural sciences. London etc.: Sage.

Hine, C. (2005). Virtual Methods: Issues in Social Research on the Internet. Berg.

Hine, C. (2006). New Infrastructures for Knowledge Production: Understanding e-science. Hershey, USA: Information Science Publishing.

Hine, C. (2008). Systematics as Cyberscience: Computers, Change, and Continuity in Science. Cambridge, USA: MIT Press.

Howard, P. N., & Jones, S. (2004). Society Online. The Internet in Context. Thousand Oaks, London, New Delhi: Sage.

Huggett, S. (2012). F1000 Journal Rankings: an alternative way to evaluate the scientific impact of scholarly communications. Research Trends.

Jankowski, N. (Ed.). (2009). E-Research: Transformation in Scholarly Practice. Routledge. Retrieved from http://www.routledge.com/books/details/9780415990288/

KNAW. (2010). Quality assessment in the design and engineering disciplines.

KNAW. (2011). Quality indicators for research in the humanities.

Li, X., Thelwall, M., & Giustini, D. (2011). Validating online reference managers for scholarly impact measurement. Scientometrics, 1-11. doi:10.1007/s11192-011-0580-x

Moed, H. F. (2005). Citation analysis in research evaluation (Vol. 9). Dordrecht: Springer.

Moed, H. F., & Glänzel, W. (2004). Handbook of quantitative science and technology research: the use of publication and patent statistics in studies of S&T systems. Dordrecht etc.: Kluwer Academic Publishers.

Oppenheim, C., Cronin, B., & Atkins, H. B. (2000). Do patent citations count? (pp. 405-432). Medford, NJ: Information Today Inc. ASIS Monograph Series.

Ponte, D., & Simon, J. (2011). Scholarly Communication 2.0: Exploring Researchers’ Opinions on Web 2.0 for Scientific Knowledge Creation, Evaluation and Dissemination. Serials Review, 37(3), 149-156. doi:10.1016/j.serrev.2011.06.002

Priem, J., & Hemminger, B. H. (2010). Scientometrics 2.0: New metrics of scholarly impact on the social Web. First Monday, 15(7). Retrieved from http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2874

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). altmetrics: a manifesto. Retrieved January 8, 2012, from http://altmetrics.org/manifesto/

Roosendaal, H. E., Zalewska-Kurek, K., & Geurts, P. A. T. M. (2010). Scientific Publishing: From Vanity to Strategy (p. 190). Chandos Publishing (Oxford) Ltd. Retrieved from http://www.amazon.co.uk/Scientific-Publishing-Strategy-Hans-Roosendaal/dp/1843344904

Rousseau, R. (1998). Sitations: an exploratory study. Cybermetrics, 1(1), 1.

Smith, R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99, 178-182.

Spaapen, J., & van Drooge, L. (2011). Introducing "productive interactions" in social impact assessment. Research Evaluation, 20(3), 211-218.

Tatum, C., & Jankowski, N. (2012). Beyond Open Access: a Framework for Openness in Scholarly Communication. In P. Wouters, A. Beaulieu, A. Scharnhorst, & S. Wyatt (Eds.), Virtual Knowledge. MIT Press.

The Virtual Knowledge Studio (Wouters, P., Vann, K., Scharnhorst, A., Ratto, M., Hellsten, I., Fry, J., et al.). (2008). Messy Shapes of Knowledge: STS Explores Informatization, New Media, and Academic Work (Vol. 3, pp. 319-351). Cambridge, Mass: MIT Press.

Thelwall, M. (2005). Link Analysis: An Information Science Approach. San Diego: Academic Press.

Thelwall, M., Wouters, P., & Fry, J. (2008). Information-centered research for large-scale analyses of new information sources. Journal of the American Society for Information Science and Technology, 59(9), 1523-1527.

Van Noorden, R. (2010). Metrics: A profusion of measures. Nature, 465(7300), 864-866. Retrieved from http://www.nature.com/news/2010/100616/full/465864a.html

Van Raan, A. (Ed.). (1988). Handbook of Quantitative Studies of Science and Technology. Amsterdam: Elsevier Science Publishers.

van Raan, A. F. J. (2004). Sleeping Beauties in science. Scientometrics, 59(3), 461-466.

 

Weller, K., & Puschmann, C. (2011). Twitter for Scientific Communication: How Can Citations/References be Identified and Measured?  Proceedings of the ACM WebSci’11. Retrieved from http://journal.webscience.org/500/

Williams, R., Pryor, G., Bruce, A., Macdonald, S., Marsden, W., Calvert, J., Dozier, M., et al. (2009). Patterns of information use and exchange: case studies of researchers in the life sciences (pp. 1-56).

Willinsky, J. (2006). The Access Principle: The Case for Open Access to Research and Scholarship. The MIT Press. Retrieved from http://www.amazon.com/dp/0262232421

Wouters, P., Bar-Ilan, J., Thelwall, M., Aguillo, I. F., Must, Ü., Havemann, F., Kretschmer, H., et al. (2010). Academic Careers Understood through Measurement and Norms (ACUMEN) (pp. 1-39).

Wouters, P., Beaulieu, A., Scharnhorst, A., & Wyatt, S. (2012). Virtual Knowledge. The MIT Press.
