Journal and Article Metrics


Background
Definitions
Tools to Locate Journal & Article Metrics
Selected Readings

Background:

Journal metrics are used to identify key journals in a research field. This identification may be most useful to scholars who are compiling a current reading list or selecting journals to which to submit future publications.

The Impact Factor may be the most familiar metric in academia. Eugene Garfield of Thomson Scientific first introduced this idea in the 1950s. Impact Factor calculations are now available through Journal Citation Reports (JCR).

Additional metrics, such as the article-level metrics for Public Library of Science (PLoS) publications, are also available. In spite of their merits, journal metrics can be misused, particularly in the evaluation of individual authors.

Definitions:

Impact Factor: “the average number of times articles from the journal published in the past two years have been cited in the JCR year.” 

  • Calculation: # of citations in the JCR year (e.g., 2008) to items the journal published in the previous 2 years / # of articles published in the journal during those 2 years (a worked sketch follows this list)
  • Note that “articles” do not normally include editorials, letters, news items, meeting abstracts, etc.
  • May include self-citations
  • Under the Journal Self Cites section, JCR does provide an Impact Factor calculation without self-citations
  • Information from “Impact Factor,” Journal Citation Reports
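
To make the arithmetic concrete, here is a minimal Python sketch of the calculation using invented citation and article counts (the journal year and figures are hypothetical; real values come from JCR):

```python
# Hypothetical worked example of the two-year Impact Factor arithmetic.
# All numbers are invented; real values come from Journal Citation Reports.

# Citations received in the JCR year (e.g., 2008) to items the journal
# published in the two previous years (2006 and 2007).
citations_in_2008 = 420

# Citable "articles" the journal published in 2006 and 2007.
articles_2006 = 150
articles_2007 = 130

impact_factor_2008 = citations_in_2008 / (articles_2006 + articles_2007)
print(f"2008 Impact Factor: {impact_factor_2008:.2f}")  # 420 / 280 = 1.50
```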

5-Year Journal Impact Factor: “the average number of times articles from the journal published in the past five years have been cited in the JCR year.”

  • Available from 2007 forward
  • 5-year Impact Factor Trend Graphs are available
  • Information from “Impact Factor,” Journal Citation Reports

Aggregate Impact Factor: the average number of times articles from a subject category published in the past two years have been cited in the JCR year.

Journal Immediacy Index: “indicates how quickly articles in a journal are cited”

  • Calculation: # of citations in the JCR year to articles the journal published in that same year / # of articles published in the journal in that year
  • Frequently issued journals, or publications that appear earlier in the year, could have an advantage
  • Information from “Immediacy Index” Journal Citation Reports

Journal Cited Half-Life: “the number of publication years from the current year which account for 50% of current citations received.”

  • If the cited half-life is 7.0, then 50% of the citations the journal received in the current year were to articles it published in the last 7 years (a worked sketch follows this list)
  • Changes are most likely to “indicate difference in format and publication history” and are less useful for assigning value to a journal
  • Information from “Cited Half-Life,” Journal Citation Reports
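
As a rough illustration (not JCR's exact procedure, which interpolates to a fractional value), here is a small Python sketch that finds the half-life from an invented breakdown of a journal's current-year citations by the age of the cited articles:

```python
# Hypothetical illustration of the cited half-life idea.
# citations_by_age[k-1] = citations received in the current JCR year to
# articles the journal published k years ago.  All numbers are invented.
citations_by_age = [30, 55, 70, 60, 45, 40, 35, 30, 20, 15]

total = sum(citations_by_age)              # 400 citations in the current year
running, half_life = 0, None
for years_back, count in enumerate(citations_by_age, start=1):
    running += count
    if running >= total / 2:               # first span of publication years
        half_life = years_back             # covering 50% of current citations
        break

print(f"Cited half-life: about {half_life} years")   # about 4 years here
```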

 Related Journals: “journal’s degree of relatedness to other journals, based on citation information”

Eigenfactor™ Score: “measures the number of times articles from the journal published in the past five years have been cited in the JCR year.”  It differs from the Impact Factor in the following ways:

  • Includes citations from sciences and social sciences
  • Eliminates self-citations
  • “Weights each reference according to a stochastic measure of the amount of time researchers spend reading the journal”
  • Information from “Eigenfactor Metrics” Journal Citation Reports

Article Influence™ Score: “a measure of a journal’s prestige based on per article citations and comparable to Impact Factor.”

  • Calculation: Eigenfactor Score / # of articles published in the journal, normalized as a fraction of all articles in all publications (see the sketch after this list)
  • “The mean Article Influence Score is 1.00. A score greater than 1.00 indicates that each article in the journal has above-average influence.”
  • Information from “Eigenfactor Metrics” Journal Citation Reports
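
A minimal sketch of this normalization, using invented journals and figures; for simplicity the influence shares below sum to 1 rather than following JCR's published Eigenfactor scaling:

```python
# Hypothetical sketch of the Article Influence normalization.  Journal names
# and all figures are invented.  influence_share[j] is journal j's share of
# total citation influence (a stand-in for the Eigenfactor Score, rescaled to
# sum to 1 here); articles[j] is its article count over the five-year window.
influence_share = {"Journal A": 0.004, "Journal B": 0.001, "Journal C": 0.995}
articles        = {"Journal A": 200,   "Journal B": 100,   "Journal C": 99_700}

total_articles = sum(articles.values())    # 100,000 articles across all journals

for j, share in influence_share.items():
    article_share = articles[j] / total_articles
    article_influence = share / article_share
    print(f"{j}: Article Influence = {article_influence:.2f}")

# Journal A: 0.004 / 0.002 = 2.00  -> above-average influence per article
# Journal B: 0.001 / 0.001 = 1.00  -> average influence per article
# Journal C: 0.995 / 0.997 = 1.00 (approximately)
```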

Tools to Locate Journal and Article Metrics:

  Journal Citation Reports (JCR) (Impact Factor)

  • Produced by Thomson Reuters
  • The purpose of JCR is to evaluate journals “with quantifiable, statistical information based on citation data” and to show the “relationship between citing and cited journals”. About JCR.
  • The following data points are available in JCR: Impact Factor, Immediacy Index, Total Cites, Total Articles, Cited Half-Life, Journal Title, 5-year Impact Factor trend graph, Eigenfactor Score, Article Influence Scores.
  • Both a Sciences (7,200+ journals, ~175 subject categories) and Social Sciences edition (2,100+ journals and ~55 subject categories) are available.
  • JCR data is also available through Web of Science.

 eigenFACTOR.org (Eigenfactor™ Score / Article Influence™ Score)

  • The Eigenfactor Project is an “academic research project sponsored by the Bergstrom lab in the Department of Biology at the University of Washington”.
  • The aim of the project is “to develop novel methods for evaluating the influence of scholarly periodicals and for mapping the structure of academic research”. About eigenFACTOR.org. 
  • Data used in Eigenfactor calculations is provided by Thomson Reuters.  Eigenfactor and Article Influence Scores are also visible in JCR.
  • The most recent year available on the website is 2007.
  • Eigenfactor uses network theory and a modified eigenvector centrality algorithm to calculate journal influence (a simplified sketch follows this list). More about Eigenfactor methods
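
The following is only a toy sketch of that underlying idea, using an invented three-journal citation network; the real Eigenfactor calculation excludes self-citations and adds a damping (teleportation) term, both omitted here:

```python
# Toy sketch of eigenvector-style centrality on an invented journal citation
# network.  This is NOT the full Eigenfactor algorithm; it only shows the core
# idea that a citation counts for more when it comes from a journal that is
# itself highly cited.

journals = ["Journal A", "Journal B", "Journal C"]

# cites[i][j] = citations from journal i (citing) to journal j (cited);
# self-citations are already zeroed out.
cites = [
    [0, 30, 10],
    [20, 0, 5],
    [60, 40, 0],
]

n = len(journals)
out_totals = [sum(row) for row in cites]   # each citing journal's total references

# Power iteration: each journal repeatedly passes its influence to the
# journals it cites, in proportion to its outgoing citations.
scores = [1.0 / n] * n
for _ in range(100):
    new = [0.0] * n
    for i in range(n):                     # citing journal
        for j in range(n):                 # cited journal
            new[j] += (cites[i][j] / out_totals[i]) * scores[i]
    total = sum(new)
    scores = [s / total for s in new]

for name, score in zip(journals, scores):
    print(f"{name}: {score:.3f}")          # Journal A ends up most influential
```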

PLoS Publications (Article-Level Metrics)

  • A nonprofit organization founded in 2000 by biomedical scientists that makes scientific and medical literature publicly available
  • In 2003, it started an open access scientific and medical publishing arm with high-quality journals.
  • In March 2009, PLoS introduced article-level metrics to provide a measure of merit for individual articles rather than merely journal metrics. 
  • Article-level metrics are based on citation information and online usage data from social bookmarks, comments & notes, blog coverage, and star ratings. 
  • "Article-level Metrics" by Cameron Neylon (video), discusses how to contribute to the evaluation of a paper's relevance or quality.

Selected Readings:

Adler, Robert; Ewing, John; and Taylor, Peter. (June 11, 2008) Citation Statistics. A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS). Available online: http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf

  • The authors report on "the use and misuse of citation data in the assessment of scientific research." They contend that, "There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded."

Bergstrom, C.T., West, J.D., and Wiseman, M.A. (5 Nov. 2008). The Eigenfactor Metrics. The Journal of Neuroscience, 28(45): 11433-11434. Reprint. http://www.jneurosci.org/cgi/reprint/28/45/11433.pdf

  • The authors clearly state from the outset that "Quantitative metrics are poor choices for assessing the research output of an individual scholar" and posit that "reading the scholar's publications and talking to experts about her work" is a better method (11433).  However, bibliometric statistics can be useful in answering a variety of other questions.  Bergstrom et al. explain the Eigenfactor Metric and claim that it is a "more sophisticated way of looking at citation data" (11433).

Bollen, J., Van de Sompel, H., Hagberg, A., and Chute, R. (29 June 2009). A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE, 4(6). doi:10.1371/journal.pone.0006022

  • The authors recognize the predominance of the JIF (Thomson Scientific Journal Impact Factor) and note its shortcomings.  They also outline other proposed or practical methods to measure scientific impact.  In their study they use 39 different measures of scholarly impact (including citation and usage data) from JCR, MESUR, and Scimago/Scopus to complete a Principal Component Analysis (PCA) and propose that their results provide "the largest and most thorough survey of usage- and citation-based measures of scientific impact," though they offer suggestions for further measures in future studies.

Chin, C.Y., Aris, M.J., Chen, X. (2 Dec. 2009).  Combination of Eigenfactor and H-Index to Evaluate Scientific Journals. Scientometrics Online First: doi: 10.1007/s11192-009-0116-9
http://springerlink.metapress.com/content/gm5625310jh89308/fulltext.pdf

  • In the introduction, the authors give a succinct overview of established and newer journal metrics.  Through their study, the authors tabulate the h-index and Eigenfactor values for engineering/scientific journals and propose that this combination may provide an alternative in scientific journal evaluation. 

Falagas, Matthew E. and Alexiou, Vangelis G. “The top-ten in journal impact factor manipulation.” Archivum Immunologiae et Therapiae Experimentalis.  2008, 56, 223–226.
http://www.springerlink.com/content/j6524480v8g00884/

  • The authors examine various ways in which publishers and authors manipulate journal impact factors to their own advantage.

Garfield, E. (2006). The History and Meaning of the Journal Impact Factor. JAMA, 295(1), 90-93 DOI: 10.1001/jama.295.1.90
http://jama.ama-assn.org/cgi/reprint/295/1/90

  • Garfield, the creator of the impact factor, answers some of the concerns voiced by impact factor critics and supports the value of citation analysis.  However, he does acknowledge that the use of journal performance indicators (JIFs) to evaluate authors is "highly controversial" (92) and warns against substituting a journal's impact factor "for the actual citation count" (92).  In the conclusion Garfield summarizes his view of the impact factor with a quote by Hoeffel.

Garfield, E. (June 1998). The Impact Factor and Using It Correctly. Der Unfallchirurg, 101(6): 413. http://www.garfield.library.upenn.edu/papers/derunfallchirurg_v101%286%29p413y1998english.html

  • Garfield answers criticism regarding the validity of using the impact factor to judge "scientific achievement in trauma surgery" (413).  His response asserts that concerns regarding self-citations, discrimination against certain journals, and the lack of citations from non-English-language journals are unsubstantiated claims that do not significantly alter the impact factor or make it suspect.  Garfield promotes using the impact factor appropriately and notes that "[t]he sources of much anxiety about Journal Impact Factors comes from their misuse in evaluating individuals" (Ibid).

Gunzburg, R., Szpalski, M., and Aebi, M. (Aug. 2002). The Impact Factor: Publish, Be Cited or Perish ... European Spine Journal 11(Supplement 1): S1.  Editorial.
http://springerlink.metapress.com/content/y78c28gfdmt32y1x/fulltext.pdf

  • The authors support viewing journal impact factors to determine the most significant scientific journals to read.  They review limitations of the JCR (Journal Citation Reports tool) and contend that "[t]he citation of an article does not reflect the quality of the paper but rather the interest or the fashion effect of its topic" (11).  In spite of impact factor shortcomings, the authors note its popularity in academia and take the opportunity to note the high impact factor of the European Spine Journal.

Hirsch, J.E. (15 Nov. 2005). An index to quantify an individual’s scientific research output. PNAS 102(46): 16569-16572. doi: 10.1073/pnas.0507655102
http://www.pnas.org/content/102/46/16569

  • Hirsch proposes the index h as a suitable index to calculate a researcher’s scientific output and lists advantages and disadvantages of specific single-number criteria.  According to Hirsch, the calculation can be based on “times cited” information from Web of Science (a small sketch follows).  The author suggests ranking groups or departments by the overall h index to determine graduate programs of interest (16572).
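
As a rough illustration of the calculation Hirsch describes, here is a small Python sketch using an invented list of citation counts:

```python
# Rough illustration of the h-index calculation, using an invented list of
# "times cited" counts (real counts would come from a source such as
# Web of Science).

def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

papers = [48, 33, 30, 22, 15, 9, 7, 6, 3, 1, 0]
print(h_index(papers))   # 7 -- seven of these papers are cited at least 7 times
```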

Monastersky, R.  (14 Oct. 2005). The Number That's Devouring Science. The Chronicle of Higher Education. http://chronicle.com/article/The-Number-That-s-Devouring/26481

  • Monastersky incorporates quotes from numerous individuals to argue that the impact factor should not be used to evaluate individuals or to garner tenure, etc.  He also documents the problems and consequences associated with the misuse of impact factor calculations.

Neylon, C. and Wu, S. (17 Nov. 2009). Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biology, 7(11): 1-6. doi:10.1371/journal.pbio.1000242
http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1000242

  • The authors consider the impact factor to be "deeply flawed both in principle and in practice as a tool for filtering the literature" (1).  They are especially critical of the time lag between the publication of an article and the appearance of citations to that article.  They propose new metrics, including ratings from reader comments, bookmarking, blog coverage, and types of page views, though they are well aware of the historic resistance to, and lack of success of, commenting systems.  In keeping with many other authors, they stress "that journal impact factor is a very poor measure of article impact" (6).

PLoS (Public Library of Science). (2009). Article-Level Metrics. Retrieved  8 Jan. 2010. http://article-level-metrics.plos.org/

  • This short introduction to article-level metrics from PLoS is divided into four sections: Research in context, Interpreting the data, Help us evaluate the data, and Videos.  This endeavor began in March 2009, and PLoS's goal is to reveal the merits of individual articles.  Article-level metrics are based on citation information and online usage data from social bookmarks, comments & notes, blog coverage, and star ratings.  Known issues with article-level metrics are also noted.  http://www.plosone.org/static/almInfo.action

Wilson, Alan E. “Journal Impact Factors Are Inflated” BioScience 57(7): 550-551. 2007. http://www.bioone.org/doi/pdf/10.1641/B570702

  • The author argues that journal impact factors are unreliable because of inherent bias and recommends that anyone wishing to evaluate the quality of a scholarly journal use other measures, including the Eigenfactor.

Wróblewski, A.K. (Dec. 2008).  A Commentary on Misuses of the Impact Factor. Archivum Immunologiae et Therapiae Experimentalis 56(6): 355-35. doi: 10.1007/s00005-008-0038-x http://springerlink.metapress.com/content/5n8330h443271xt1/fulltext.pdf

  • The author reviews Garfield's warning of impact factor misuse and reiterates his own concern that funding agencies and academic institutions use IF to judge scientists.  Yet, it is possible that an article in a low impact journal may receive more citations than an article in a high impact journal.  

 

