Perspective

Causes for the Persistence of Impact Factor Mania

Arturo Casadevall
Departments of Microbiology & Immunology and Medicine, Albert Einstein College of Medicine, Bronx, New York, USA

Ferric C. Fang
Departments of Laboratory Medicine and Microbiology, University of Washington School of Medicine, Seattle, Washington, USA
DOI: 10.1128/mBio.00064-14

This article has a correction. Please see:

  • Causes for the Persistence of Impact Factor Mania - June 03, 2014

ABSTRACT

Numerous essays have addressed the misuse of the journal impact factor for judging the value of science, but the practice continues, primarily as a result of the actions of scientists themselves. This seemingly irrational behavior is referred to as “impact factor mania.” Although the literature on the impact factor is extensive, little has been written on the underlying causes of impact factor mania. In this perspective, we consider the reasons for the persistence of impact factor mania and its pernicious effects on science. We conclude that impact factor mania persists because it confers significant benefits to individual scientists and journals. Impact factor mania is a variation of the economic phenomenon known as the “tragedy of the commons,” in which scientists act rationally in their own self-interest despite the detrimental consequences of their actions for the overall scientific enterprise. Various measures to reduce the influence of the impact factor are considered.

IMPORTANCE Science and scientists are currently afflicted by an epidemic of mania manifested by associating the value of research with the journal where the work is published rather than the content of the work itself. The mania is causing profound distortions in the way science is done that are deleterious to the overall scientific enterprise. In this essay, we consider the forces responsible for the persistence of the mania and conclude that it is maintained because it disproportionately benefits elements of the scientific enterprise, including certain well-established scientists, journals, and administrative interests. Our essay suggests steps that can be taken to deal with this debilitating and destructive epidemic.

The views expressed in this article do not necessarily reflect the views of the journal or of ASM.

INTRODUCTION

The journal “impact factor” was conceived by Eugene Garfield in 1955 to help librarians identify the most influential journals based on the number of citations, and the first ranking of journals by impact factor was published in 1972 (1). Today the value of a scientific publication is increasingly judged by the impact factor of the journal in which it is published, and this in turn influences the ability of scientists to be appointed, promoted, and successfully funded. It is not immediately obvious why scientists as a group would embrace the impact factor of a journal as an indicator of the importance of a publication and, by extension, the quality of individual scientists and their work. It has been suggested that the seemingly irrational focus on the impact factor amounts to “impact factor mania” (2). Is this an accurate diagnosis? “Mania” is defined as “an excessively intense enthusiasm, interest, or desire; a craze” (3). Evidence of an excessively intense enthusiasm and desire is apparent in the fixation of many scientists on publishing their work in journals with the highest possible impact factor. Evidence of interest is apparent in a search of the PubMed database using the keywords “impact factor” and “2013,” which reveals dozens of publications in the past year alone. Evidence of a craze is perhaps more elusive, but after considering that the definition of “craze” is “to cause to become mentally deranged or obsessed; make insane,” we note some trends consistent with the definition. Although the term “mentally deranged” is more applicable to individuals than to a field, some manifestations of field behavior are consistent with insanity. Obsession is evident in the behavior of certain scientists who shop for high-impact journals (4). Therefore, we conclude that according to this definition, the life sciences are arguably in a state of mania with regard to the journal impact factor. The analogy of this behavior to disease has been made by others, who have referred to the same phenomenon as “journal mania” (5) and “impactitis” (6).

CAUSES OF IMPACT FACTOR MANIA

We subscribe to an economic view of human behavior in which choices are made in response to incentives. Accordingly, there must be compelling reasons for scientists to engage in the unscientific behavior of linking the quality of science with publication venue. There are aspects of the current scientific enterprise that can at least render the apparently illogical behavior of impact factor mania understandable, if not completely rational.

Hypercompetition for funding and jobs. In the United States, success rates for grant applications are at historic lows, and the imbalance between applicants and academic positions for scientists has reached a crisis point (7–9). Review panels are routinely asked to discriminate between grant applications that seem equally meritorious, and search or promotion committees likewise must decide which of many highly qualified candidates are most deserving of hire or professional advancement. In this environment, quantitative bibliometric tools seem to offer an objective, measurable way to ascertain researcher performance (10).

Paucity of objective measures of the importance of scientific work. There are presently more than 25,000 journals and over one million new articles published each year. The reliance of scientists and granting agencies on the impact factor as a measure of scientific quality is rooted in the need for quantitative measures of importance in science (11). In this context, the impact factor of the journal where work is published is often used as a surrogate marker of excellence, despite the fact that the citation frequency for the journal does not predict the citation frequency for individual papers (12–14). Although we have proposed a scheme to assess the importance of scientific work (11), our approach remains subjective and does not readily lend itself to quantitative analysis. Hence, impact factor mania is driven by the absence of other readily available parameters for judging the importance of scientific articles.

Hyperspecialization of science. As science has succeeded as an enterprise, it has become ever more specialized (15). This specialization has made it increasingly difficult for scientists working in different fields to understand each other’s work. Relying on publication in highly selective journals as a surrogate measure of quality provides a convenient, if intellectually lazy, alternative to attempting to read and understand a paper outside one’s specialty. At least one author has attributed impact factor mania to laziness on the part of senior scientists and bureaucrats (5).

Benefits to selected journals. The high-impact journals produce a highly desirable commodity when a paper is accepted for publication. These journals create a scarcity of publication slots by rejecting the overwhelming majority of submissions, resulting in a monopoly that restricts access and corners the market for excellent articles. This in turn creates a sense of exclusivity that encourages more submissions, as scientists equate low acceptance rates with greater merit (16). Thus, high-impact journals are perversely rewarded for rejecting manuscripts. A central error made by many scientists is in assuming that exclusivity is an indicator of exceptional quality, an assumption that is not always justified (17). Nevertheless, the impact factor of a journal is a better predictor of subsequent citation frequency than article quality (18). Nature, Science, and Cell collectively account for 24% of the 2,100 most-cited articles across all scientific fields (19).

Benefits to scientists. Publication in prestigious journals has a disproportionately high payoff that translates into a greater likelihood of academic success. This has been referred to as acceptance to “The Golden Club,” with rewards in the form of jobs, grants, and visibility (20). There is also a tendency for successful publishing in highly selective journals to beget more success, perhaps in part because greater visibility attracts capable and ambitious trainees and also because editors and reviewers may apply more liberal standards in assessing submissions from authors who publish regularly in such venues (21). This helps to perpetuate the cycle. The tendency for the rich to get richer and the poor to get poorer was designated the “Matthew effect” by the sociologist Robert K. Merton (22). The economics of the Matthew effect imply that prominent scientists have a stake in the perpetuation of impact factor mania.

National endorsements. Some nations have developed schemes to rate the productivity of their scientists based on the impact factor of the journals in which their papers are published. Brazil has established a “Qualis” scale, based on the average impact factor of a scientist’s publications, that is used to grade students and faculty (23). China has been reported to offer monetary rewards to editors who increase the impact factors of their journals (24), and China, Turkey, and South Korea offer cash bonuses to scientists who publish in journals with high impact factors (25).

Prestige by association. The cachet of the most highly selective journals is readily transferred to their contents. Although one may resent the disproportionate influence of the most prestigious journals, one cannot deny that many important and high-quality research articles are published there, and articles are judged by the company they keep. Of course, many articles of equal or greater quality are published in less prestigious venues, and these may be unfairly neglected.

We conclude that impact factor mania persists because it is useful to certain scientists, certain journals, and the bureaucracy of science, particularly in certain nations. However, benefits to individuals and special interest groups do not necessarily translate into an overall benefit for the scientific enterprise.

PROBLEMS WITH IMPACT FACTOR MANIA

We submit that the current impact factor mania, whether applied to individual researchers, publications, or grants, is seriously misguided and exerts an increasingly detrimental influence on the scientific enterprise. In particular, we would emphasize the following concerns.

Distortions in the scientific enterprise. The greatest distortion caused by impact factor mania is the conflation of paper quality with publication venue rather than the actual content of the paper. This encourages the branding of science and scientists by the journals in which their work is published. The likelihood of obtaining funding, academic promotion, selection for awards, and election to honorific societies becomes dependent, at least in part, on publication venue. This distorted value system has become self-sustaining, with the editors of high-impact journals commanding far greater power and influence than is healthy for the scientific enterprise.

The full impact of a scientific discovery may not be apparent for many years. The journal impact factor is calculated by dividing the number of citations in a given year to articles that a journal published in the previous 2 years by the number of articles published in those 2 years. However, the impact of truly important and novel findings often takes more than 2 years to be fully realized (26). Thus, the central premise of measuring the importance of scientific journals or papers on the basis of impact factor is fundamentally flawed. An unanticipated consequence of the way the impact factor is calculated is that the most novel and innovative research may be less attractive to some journals because such work, by its very nature, will have its major influence at a time when it no longer contributes to the calculation. Journals that reject manuscripts because they cannot immediately appreciate the importance of the work may be turning down the next penicillin, PCR, or tricarboxylic acid cycle (27). Problems with short-sighted peer review are not limited to journals; an analysis at the National Institute of General Medical Sciences (NIGMS) has concluded that current grant review procedures have an error rate of approximately 30% and result in many meritorious applications going unfunded (28).
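
To make the 2-year window concrete, here is a minimal sketch in Python of how an impact factor is computed; the journal, the citation counts, and the “slow-burn” paper are all hypothetical values invented for illustration, not real data.

```python
# Minimal sketch: computing a hypothetical journal's 2014 impact factor.
# IF(2014) = citations received in 2014 by items published in 2012-2013,
#            divided by the number of citable items published in 2012-2013.

items_2012_2013 = 250      # citable items the journal published in 2012-2013
citations_in_2014 = 1375   # 2014 citations to those items

impact_factor_2014 = citations_in_2014 / items_2012_2013
print(f"2014 impact factor: {impact_factor_2014:.2f}")  # -> 5.50

# The flaw discussed above: a 2013 paper whose influence peaks later
# contributes only its within-window citations to the calculation.
slow_burn_citations = {2014: 2, 2015: 40, 2016: 300}
print(f"Counted toward the IF: {slow_burn_citations[2014]} of "
      f"{sum(slow_burn_citations.values())} eventual citations")
```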

Emphasis on high impact means that many meritorious studies will not be funded or published. Asking scientists to conduct only high-impact research creates a strong bias that discourages high-risk research and reduces the likelihood of unexpected breakthrough discoveries. We suggest an analogy between scientists and foraging predators, which employ random Brownian movement when prey are abundant and use more complex yet still random Lévy flights when searching for sparser prey (29, 30). Scientists in search of new discoveries cannot hope to investigate the full scope of nature if they must limit their searches to areas that a consensus of other scientists judges to be important. As Vannevar Bush observed nearly 70 years ago, “Basic research is performed without thought of practical ends … Many of the most important discoveries have come as a result of experiments undertaken with very different purposes in mind … It is certain that important and highly useful discoveries will result from some fraction of the undertakings in basic science, but the results of any one particular investigation cannot be predicted with accuracy” (31).

Journal impact factor and individual article citation rate are poorly correlated. The majority of a journal’s impact factor is determined by a minority of its papers that are very highly cited (13). It is well known that the impact factor does not necessarily predict the citation prospects of other papers published in the same journal (12, 14, 18, 32), and the relationship between impact factor and citation rate may be weakening (33). Furthermore, when the impact of science is analyzed by other criteria, there is at best a very weak positive correlation with the impact factor of the journal in which the work was published, whereas journal rank by impact factor correlates inversely with the reliability of research findings (14, 34–36).
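
The skew behind this point is easy to demonstrate with invented numbers. The sketch below builds a heavy-tailed citation distribution for 100 hypothetical articles and shows how a few highly cited papers dominate the journal-level mean (which is what the impact factor reflects) while the typical article sits far below it.

```python
# Minimal sketch: journal-level mean vs. typical-article citations.
# The counts are invented to mimic a heavy-tailed citation distribution.
import statistics

citations = [0] * 40 + [1] * 30 + [2] * 15 + [5] * 10 + [60, 90, 150, 220, 400]

mean_cites = statistics.mean(citations)      # drives the journal-level number
median_cites = statistics.median(citations)  # what a typical article receives
top5_share = sum(sorted(citations, reverse=True)[:5]) / sum(citations)

print(f"articles: {len(citations)}")
print(f"mean: {mean_cites:.1f}, median: {median_cites}")         # -> 10.3 vs 1.0
print(f"top 5 papers supply {top5_share:.0%} of all citations")  # -> ~89%
```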

Citation rate is an imperfect indicator of scientific quality and importance. Citation rate is highly dependent on factors such as field size and name recognition (37, 38). Studies suggest that journal impact factor may correlate poorly with the true value of research to a field (39, 40). Review articles and descriptions of new methods tend to be disproportionately cited (41, 42). Moreover, an emphasis on citation rate as a measure of impact perversely discourages research in neglected fields that are deserving of greater study.

Delays in the communication of scientific findings. The disproportionate rewards associated with publication in high-impact journals create compelling incentives for investigators to have their work published in such journals. Authors typically submit their work to multiple journals in serial fashion, hoping for acceptance by a journal with the highest possible impact factor. This effort can consume considerable time and resources, since investigators often respond to reviewers by performing additional experiments in an effort to convince journals to accept their papers. The multiple submissions also consume reviewers’ and editors’ time and delay the public disclosure of scientific knowledge. Publication delay slows the pace of science and can directly affect society when a manuscript contains information important for drug and vaccine development, public health, or medical care. For an investigator, the time spent pursuing a high-impact journal can result in a loss of citations (4). These effects translate into major inefficiencies in the dissemination of scientific information.

Creation of perverse incentives. The pressure for publication in high-impact journals may contribute to reduced reliability of the scientific literature. Articles retracted due to data falsification or fabrication are disproportionately found among high-impact journals (36), and errors also appear to be more frequently encountered (34, 36). The most selective journals demand clean stories and immaculate data, which seldom match the reality of laboratory investigation, where experiments can produce messy results. Hence, some investigators may be tempted to cut corners or manipulate data in an effort to benefit from the disproportionate rewards associated with publishing in prestigious journals. Impact factor mania also creates great pressure on journals to raise their impact factors, which can lead to practices that range from gaming the system to outright editorial misconduct. Some journals have been reported to pressure authors to include more citations to the journal itself in order to artificially increase its impact factor. Recently a scheme was uncovered in which three journals conspired to cite each other’s papers in a mutual effort to increase their impact factors, a conspiracy blamed in part on the use of the journal impact factor by a Brazilian funding agency to judge the quality of articles (43).

PROPOSALS FOR REFORM

Scientists’ unhealthy obsession with impact and impact factors has been widely criticized (2, 5, 12–14, 16, 26, 41, 44–50). Yet many feel trapped into accepting the current value system when submitting their work for publication or judging the work of others. Two recent efforts to counter impact factor mania are noteworthy.

DORA. The American Society for Cell Biology, in concert with journal editors, scientific institutions, and prominent scientists, organized the San Francisco Declaration on Research Assessment (DORA) in December 2012 to combat the use of the journal impact factor to assess the work of individual scientists (51). Some prominent scientists in the biomedical research community have begun to speak out in support of DORA. Nobel Prize winner Harold Varmus is advocating the redesign of curricula vitae to emphasize contributions instead of specific publications (52). Dr. Varmus has decried a “flawed values system” in science and laments the fact that “researchers feel they will win funding only if they publish in top journals” (52).

A boycott of high-impact journals. Nobelist and eLife Editor-in-Chief Randy Schekman has recently criticized the monopoly of what he calls “luxury journals” in an editorial published in the British newspaper The Guardian. Schekman has vowed not to publish hereafter in Nature, Cell, and Science, stating that the disproportionate rewards associated with publishing in those journals distort science in a manner akin to the effects of large bonuses on the banking industry (53). Although such efforts are well-intentioned, we are skeptical that boycotting the impact factor or the “luxury journals” will be effective, because the economics of current science dictate that scientists who succeed in publishing in such journals will accrue disproportionate rewards. This will continue to be an irresistible attraction. Even if the journal impact factor were to disappear tomorrow, the prestige associated with certain journals would persist, and authors would continue to try to publish there. Most scientists do not actually know the impact factor of individual journals—only that publication in such journals is highly sought after and respected. For instance, it is not widely known that Science is actually ranked only 20th among all journals in impact factor, lower than journals such as CA—A Cancer Journal for Clinicians and Advances in Physics. Similarly, we fear that boycotts of specific prestigious journals may hurt trainees in the boycotting laboratories by depriving them of high-visibility venues for their work. In lieu of Science, Nature, or Cell, Schekman has recommended that authors submit their best papers to eLife. However, the journal’s managing executive editor has described eLife as “a very selective journal,” further noting that “it’s likely that the rejection rate will be quite high” (54). As long as a critical mass of scientists continues to submit their best work to highly selective journals, impact factor mania, or its equivalent, is likely to persist.

WHAT SCIENTISTS CAN DO

Despite widespread recognition that the impact factor is misused, the misuse continues and is likely to continue until the scientific community acts to reduce its value to those who benefit from it. We suggest some specific measures to ameliorate the damage done by this superficial measure of scientific quality and importance.

Reform review criteria for funding and promotion. We strongly advocate the assessment of research quality by peers in an individual’s field rather than the use of simplistic measures based on the quantity of papers and the prestige of the publication venue. Academics should resist attempts by institutions to use the journal impact factor in promotion or tenure decisions. When evaluating the performance of scientists, the focus should be on contributions rather than publication venue. Academic administrators should be educated that the impact factor is an inadequate measure of individual achievement (55) and should be informed of the DORA principles. Individual scientists may sign to support the DORA initiative at http://am.ascb.org/dora/ (56).

Consider the use of diverse metrics. If metrics are to be used for judging individual scientists and their projects, review committees and administrators should consider a diverse range of parameters. Although we do not suggest that the quality of scientists’ work can be reduced to a single number, we do note that alternative citation metrics, such as the h-index and the eigenfactor, are probably better measures of scientific impact than the impact factor. Evaluations that employ diverse metrics are likely to provide more insight than assessments based on a single criterion.
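
As a concrete example of one such alternative metric, below is a minimal sketch of the h-index (a scientist has index h if h of his or her papers have each been cited at least h times). The two publication records are hypothetical and share the same citation total, illustrating a distinction that a journal-level number cannot draw.

```python
# Minimal sketch: the h-index from per-paper citation counts.
# h = the largest value such that at least h papers have >= h citations each.

def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank   # the rank-th best paper still has >= rank citations
        else:
            break
    return h

steady  = [12, 11, 10, 9, 9, 8, 8, 7, 6, 5]  # 85 citations, spread out -> h = 7
one_hit = [80, 3, 2, 0, 0, 0, 0, 0, 0, 0]    # 85 citations, one blockbuster -> h = 2

print(h_index(steady), h_index(one_hit))     # -> 7 2
```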

Increase interdisciplinary interactions. The specialization of science has made it increasingly difficult for scientists to understand research outside their own field (15). However, there is no substitute for actually reading and understanding a scientific article. This means that scientists who are asked to judge the productivity of another scientist working in a different field must acquire a working familiarity with that field. Increasing opportunities for interaction between researchers from different fields in training programs and at seminars and meetings will help to improve the quality of research assessment as well as stimulate transdisciplinary or multidisciplinary research.

Encourage elite journals to become less exclusive. Letters of rejection often state, “We regret that we receive many more meritorious submissions than we can publish.” Why should the elite journals not expand to accommodate all meritorious articles? Artificial restrictions on journal size serve to perpetuate the current wasteful system, which requires authors to cascade serial submissions from one journal to another.

Address current imbalances in research funding and the scientific workforce. As long as there is an unreasonable level of competition for grants and jobs, methods to compare scientists with one another will be used. Scientists must work with policymakers to alleviate the shortages of research funding and job opportunities that have created the current crisis.

A return to essential scientific values. Ultimately, the only cure for impact factor mania must come from scientists themselves. If scientists fail to curb the current impact factor mania, they will pass on to their trainees a distorted value system that rewards the acquisition of publications in exclusive journals rather than the acquisition of knowledge, and one that promotes an obsession with individual career success over service to society. Moreover, the current insistence on funding only high-impact projects is skewing the focus of research efforts and increasing the likelihood that important avenues of investigation will be overlooked. Scientists must return to essential scientific values that place an emphasis on research quality and reproducibility, the advancement of knowledge, and service to society over the accumulation of publications in prestigious journals.

CONCLUSIONS

The persistence of impact factor mania, which at first seems irrational, becomes more understandable in light of the confluence of diverse forces within the scientific enterprise that encourage, promote, and perpetuate it. The major factor underlying impact factor mania is the disproportionate benefit to those few scientists who succeed in placing their work in highly selective journals, and that knowledge compels most, if not all, scientists to participate. In keeping with the current winner-takes-all economics of science (57), impact factor mania benefits a few, creates many losers, and distorts the process of science, yet it can be understood as rational behavior by individual scientists because of the large rewards accrued by those who succeed. In 1968, Garrett Hardin authored an essay in which he used the phrase “tragedy of the commons” to describe in economic terms a situation in which individuals behave in ways that are rational and in their self-interest but detrimental to the community (58). In this regard, impact factor mania exemplifies a tragedy of the commons in the midst of the scientific enterprise. Impact factor mania will continue until the scientific community makes a concerted effort to break this destructive behavior.

Copyright © 2014 Casadevall and Fang.

This is an open-access article distributed under the terms of the Creative Commons Attribution-Noncommercial-ShareAlike 3.0 Unported license, which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

REFERENCES

1. Garfield E. 2006. The history and meaning of the journal impact factor. JAMA 295:90–93. doi:10.1001/jama.295.1.90.
2. Alberts B. 2013. Impact factor distortions. Science 340:787. doi:10.1126/science.1240319.
3. Editors of the American Heritage Dictionary. 2013. American Heritage dictionary of the English language. Houghton Mifflin Harcourt Publishers, Boston, MA.
4. Sekercioğlu CH. 2013. Citation opportunity cost of the high impact factor obsession. Curr. Biol. 23:R701–R702. doi:10.1016/j.cub.2013.07.065.
5. Colquhoun D. 2003. Challenging the tyranny of impact factors. Nature 423:479. doi:10.1038/423479b.
6. van Diest PJ, Holzel H, Burnett D, Crocker J. 2001. Impactitis: new cures for an old disease. J. Clin. Pathol. 54:817–819. doi:10.1136/jcp.54.11.817.
7. Cyranoski D, Gilbert N, Ledford H, Nayar A, Yahia M. 2011. Education: the PhD factory. Nature 472:276–279. doi:10.1038/472276a.
8. Stephan P. 2012. Research efficiency: perverse incentives. Nature 484:29–31. doi:10.1038/484029a.
9. Alberts B. 2013. Am I wrong? Science 339:1252. doi:10.1126/science.1237434.
10. Iyengar R, Wang Y, Chow J, Charney DS. 2009. An integrated approach to evaluate faculty members’ research performance. Acad. Med. 84:1610–1616. doi:10.1097/ACM.0b013e3181bb2364.
11. Casadevall A, Fang FC. 2009. Important science—it’s all about the SPIN. Infect. Immun. 77:4177–4180. doi:10.1128/IAI.00757-09.
12. Seglen PO. 1997. Why the impact factor of journals should not be used for evaluating research. BMJ 314:498–502. doi:10.1136/bmj.314.7079.498.
13. Nature. 2005. Not-so-deep impact. Nature 435:1003–1004. doi:10.1038/4351003b.
14. Brembs B, Button K, Munafo M. 2013. Deep impact: unintended consequences of journal rank. Front. Hum. Neurosci. 7:291. doi:10.3389/fnhum.2013.00291.
15. Casadevall A, Fang FC. 13 January 2014. Specialized science. Infect. Immun. doi:10.1128/IAI.01530-13.
16. Young NS, Ioannidis JP, Al-Ubaydli O. 2008. Why current publication practices may distort science. PLoS Med. 5:e201. doi:10.1371/journal.pmed.0050201.
17. Petsko GA. 2011. The one new journal we might actually need. Genome Biol. 12:129. doi:10.1186/gb-2011-12-9-129.
18. Callaham M, Wears RL, Weber E. 2002. Journal prestige, publication bias, and other characteristics associated with citation of published studies in peer-reviewed journals. JAMA 287:2847–2850. doi:10.1001/jama.287.21.2847.
19. Ioannidis JP. 2006. Concentration of the most-cited papers in the scientific literature: analysis of journal ecosystems. PLoS One 1:e5. doi:10.1371/journal.pone.0000005.
20. Reich ES. 2013. Science publishing: the golden club. Nature 502:291–293. doi:10.1038/502291a.
21. Serenko A, Cox R, Bontis BL. 2011. The superstar phenomenon in the knowledge management and intellectual capital academic discipline. J. Informetr. 5:333–345. doi:10.1016/j.joi.2011.01.005.
22. Merton RK. 1968. The Matthew effect in science: the reward and communication systems of science are considered. Science 159:56–63. doi:10.1126/science.159.3810.56.
23. Ferreira RC, Antoneli F, Briones MR. 2013. The hidden factors in impact factors: a perspective from Brazilian science. Front. Genet. 4:130. doi:10.3389/fgene.2013.00130.
24. Huggett S. 2012. Impact factors: cash puts publishing ethics at risk in China. Nature 490:342. doi:10.1038/490342c.
25. Franzoni C, Scellato G, Stephan P. 2011. Science policy. Changing incentives to publish. Science 333:702–703. doi:10.1126/science.1197286.
26. Lawrence PA. 2007. The mismeasurement of science. Curr. Biol. 17:R583–R585. doi:10.1016/j.cub.2007.06.014.
27. Nature. 2003. Coping with peer rejection. Nature 425:645. doi:10.1038/425645a.
28. Berg J. 2013. On deck chairs and lifeboats. ASBMB Today 12:3–6.
29. Mandelbrot B. 1982. The fractal geometry of nature. W. H. Freeman, New York, NY.
30. Humphries NE, Queiroz N, Dyer JR, Pade NG, Musyl MK, Schaefer KM, Fuller DW, Brunnschweiler JM, Doyle TK, Houghton JD, Hays GC, Jones CS, Noble LR, Wearmouth VJ, Southall EJ, Sims DW. 2010. Environmental context explains Lévy and Brownian movement patterns of marine predators. Nature 465:1066–1069. doi:10.1038/nature09116.
31. Bush V. 1945. Science the endless frontier. US Government Printing Office, Washington, DC.
32. Rostami-Hodjegan A, Tucker GT. 2001. Journal impact factors: a ‘bioequivalence’ issue? Br. J. Clin. Pharmacol. 51:111–117. doi:10.1111/j.1365-2125.2001.01349.x.
33. Lozano G, Larivière V, Gingras Y. 2012. The weakening relationship between the impact factor and papers’ citations in the digital age. J. Am. Soc. Inf. Sci. Technol. 63:2140–2145. doi:10.1002/asi.22731.
34. Brown EN, Ramaswamy S. 2007. Quality of protein crystal structures. Acta Crystallogr. D Biol. Crystallogr. 63:941–950. doi:10.1107/S0907444907033847.
35. Fang FC, Casadevall A. 2011. Retracted science and the retraction index. Infect. Immun. 79:3855–3859. doi:10.1128/IAI.05661-11.
36. Fang FC, Steen RG, Casadevall A. 2012. Misconduct accounts for the majority of retracted scientific publications. Proc. Natl. Acad. Sci. U. S. A. 109:17028–17033. doi:10.1073/pnas.1212247109.
37. Falagas ME, Charitidou E, Alexiou VG. 2008. Article and journal impact factor in various scientific fields. Am. J. Med. Sci. 335:188–191. doi:10.1097/MAJ.0b013e318145abb9.
38. Hernán MA. 2008. Epidemiologists (of all people) should question journal impact factors. Epidemiology 19:366–368. doi:10.1097/EDE.0b013e31816a9e28.
39. Sutherland WJ, Goulson D, Potts SG, Dicks LV. 2011. Quantifying the impact and relevance of scientific research. PLoS One 6:e27537. doi:10.1371/journal.pone.0027537.
40. Eyre-Walker A, Stoletzki N. 2013. The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations. PLoS Biol. 11:e1001675. doi:10.1371/journal.pbio.1001675.
41. Hecht F, Hecht BK, Sandberg AA. 1998. The journal “impact factor”: a misnamed, misleading, misused measure. Cancer Genet. Cytogenet. 104:77–81.
42. Bernstein J, Gray CF. 2012. Content factor: a measure of a journal’s contribution to knowledge. PLoS One 7:e41554. doi:10.1371/journal.pone.0041554.
43. Van Noorden R. 2013. Brazilian citation scheme outed. Nature 500:510–511. doi:10.1038/500510a.
44. Walter G, Bloch S, Hunt G, Fisher K. 2003. Counting on citations: a flawed way to measure quality. Med. J. Aust. 178:280–281.
45. PLoS Med. 2006. The impact factor game. It is time to find a better way to assess the scientific literature. PLoS Med. 3:e291. doi:10.1371/journal.pmed.0030291.
46. Rossner M, Van Epps H, Hill E. 2007. Show me the data. J. Exp. Med. 204:3052–3053. doi:10.1084/jem.20072544.
47. Simons K. 2008. The misused impact factor. Science 322:165. doi:10.1126/science.1165316.
48. Szklo M. 2008. Impact factor: good reasons for concern. Epidemiology 19:369. doi:10.1097/EDE.0b013e31816b6a7a.
49. Smith R. 2008. Beware the tyranny of impact factors. J. Bone Joint Surg. Br. 90:125–126. doi:10.1302/0301-620X.90B2.20258.
50. Nat. Mater. 2013. Beware the impact factor. Nat. Mater. 12:89. doi:10.1038/nmat3566.
51. Schekman R, Patterson M. 2013. Reforming research assessment. Elife 2:e00855. doi:10.7554/eLife.00855.
52. Kaiser J. 2013. Varmus’s second act. Science 342:416–419. doi:10.1126/science.342.6157.416.
53. Schekman R. 2013. How journals like Nature, Cell and Science are damaging science. The Guardian, London, United Kingdom.
54. Patterson M. 2012. eLife—an author’s new best friend. http://www.elifesciences.org/elife-an-authors-new-best-friend/.
55. Russell R, Singh D. 2009. Impact factor and its role in academic promotion. Int. J. Chron. Obstruct. Pulmon. Dis. 4:265–266. doi:10.2147/COPD.S6533.
56. Misteli T. 2013. Eliminating the impact of the impact factor. J. Cell Biol. 201:651–652. doi:10.1083/jcb.201304162.
57. Casadevall A, Fang FC. 2012. Winner takes all. Sci. Am. 307:13. doi:10.1038/scientificamerican0812-13.
58. Hardin G. 1968. The tragedy of the commons. The population problem has no technical solution; it requires a fundamental extension in morality. Science 162:1243–1248.