Retraction policies of top scientific journals ranked by impact factor.

OBJECTIVE
This study gathered information about the retraction policies of the top 200 scientific journals, ranked by impact factor.


METHODS
Editors of the top 200 science journals for the year 2012 were contacted by email.


RESULTS
One hundred forty-seven journals (74%) responded to a request for information. Of these, 95 (65%) had a retraction policy. Of journals with a retraction policy, 94% had a policy that allows the editors to retract articles without authors' consent.


CONCLUSIONS
The majority of journals in this sample had a retraction policy, and almost all of them would retract an article without the authors' permission.


INTRODUCTION
Scholarly journals sometimes retract articles because they are subsequently found to have serious flaws that undermine the reliability of the data or results, or because they involve research misconduct, such as fabrication, falsification, or plagiarism [1,2]. Several recent studies indicate there has been a significant increase in the number of papers retracted from the scientific literature in the last decade [3-5]. The majority of these papers have been retracted because of misconduct [1-3]. It is not known whether the increasing number of retractions is due to an overall increase in research misconduct, but evidence suggests that the general trend is related to a growing awareness among editors and publishers of the importance of retracting papers that are fraudulent or have major flaws [6-8]. A key study of retractions found that a journal's retraction rate and its retractions for fraud are positively associated with the journal's impact factor [3,9]. A possible explanation for this association is that authors are more willing to risk getting caught breaking ethical rules in order to obtain the career rewards of publication in high-impact journals [9]. Another possible explanation is that papers published in high-impact journals are more likely to draw scrutiny that leads to the discovery of fraudulent research requiring retraction [9].
Retracting scientific papers can pose ethical and legal challenges for journal editors and publishers [1,2]. The easiest cases occur when the authors all agree that the paper should be retracted due to serious error or misconduct. In harder cases, the authors do not all agree that a paper should be retracted. For example, one author may oppose retraction, believing that a serious flaw identified in the paper affects only a part of the research and not the entire study. If the authors do not all agree on the decision to retract a paper, the editor must decide whether to retract a paper without consent of all of the authors. If a paper is retracted without the consent of all authors, a journal may face the threat of litigation from dissenting authors. When editors receive allegations of research misconduct related to a published paper, they must decide how to deal with these accusations and whether to retract a paper suspected of being affected by misconduct. Editors may decide not to retract a paper until misconduct is confirmed by the author's institution, although they may publish an expression of concern while an investigation is ongoing or in the absence of a proper investigation.
The Committee on Publication Ethics (COPE) was established in 1997 to provide a forum for editors and publishers to discuss issues related to publication ethics. COPE now has over 9,000 members [10]. In 2009, COPE published guidelines for retracting articles. The guidelines distinguish between retractions, corrections, and expressions of concern; discuss the reasons for retracting an article; and describe procedures for retracting articles, including a format for retraction notices. The guidelines state that editors may retract an article or publish an expression of concern without the consent of all the authors [11].
A study published in 2004 provided some data on the retraction policies of top biomedical journals [12]. The author examined the instructions for authors for 122 top biomedical journals to determine whether they included information about journal retraction policies. If the instructions for authors did not include information about retraction policies, the editors were contacted by email or mail to obtain information about retraction policies. Only 21% of the studied journals had a retraction policy. While this study provided some useful information about journal retraction policies, it had several limitations. First, the study is 10 years old, and journals may have adopted retraction policies in the interim, especially since retractions have become a hot topic in scientific publishing and the COPE guidelines were not published until 2009. Second, the study did not analyze the content of retraction policies. Third, the study was limited to biomedical journals and did not include journals in the physical sciences, engineering, or social and behavioral sciences.
The purpose of this study was to provide updated information on the retraction policies of major science journals. The specific aims were to: (1) determine the percentage of the top 200 science journals ranked by impact factor that have a retraction policy; (2) analyze the content of journal retraction policies; and (3) ascertain whether having a retraction policy is associated with impact factor, scientific discipline, or status as a review journal.

METHODS
The authors emailed the editors of the top 200 science journals ranked by Journal Citation Reports impact factor for the year 2012 [13]. Impact factor is a measure of the average number of times that an article published in a journal is cited during a 2-year period [14]. In the email, we asked the editors to provide us with information about their retraction policies (if any). We sent a reminder email if we did not receive a response. We developed a system for coding the responses based on how well journal policies complied with the COPE guidelines. Responses were coded according to the following questions:
1. Does the journal have a retraction policy?
2. What is the source of the policy (e.g., journal, publisher, COPE, etc.)?
3. Does the policy allow the editor to retract articles without the consent of all of the authors?
4. Does the policy allow the editor to publish an expression of concern without the consent of all the authors?
5. Does the policy require retraction notices published in the journal to state the reason for the retraction, such as misconduct, error, and so on?
6. Does the policy include procedures for retracting articles, such as linking the retraction to the original article in databases, marking the original article as retracted, and so on?
We considered a journal to have a retraction policy if the respondents provided us with a copy of a retraction policy, referred us to a retraction policy on the journal's or publisher's website, or said that they followed retraction guidelines provided by other organizations, such as COPE or the International Committee of Medical Journal Editors (ICMJE). We also considered a journal as having a retraction policy if the respondents referred us to another document, such as a policy on corrections or research misconduct that also addressed retractions. We did not consider a journal as having a retraction policy if the respondents said they did not have a policy or they said they handled retractions on a case-by-case basis. We also did not consider a journal to have a retraction policy if the respondents mentioned a source of potential guidance that was usually followed (such as COPE or ICMJE) but stated that they handled retractions on a case-by-case basis. If a journal followed guidelines provided by the publisher or another organization (such as COPE), we used those guidelines to code their response on the characteristics of the policy. If a publisher responded for a family of journals, we used the group response as the response for each of those journals for our aggregate data but not for our statistical analysis (see discussion below). We did not consider a journal to allow retractions or expressions of concern without the consent of all authors unless the journal's policy specifically stated that the journal could do this or implied that it could.
Two of us (Resnik and Wager) independently coded the email responses and then resolved disagreements. We also collected information on the journal's impact factor, its publication of review articles only, and the type of science it published (physical science and engineering, biomedical sciences, social and behavioral sciences, or science from various disciplines, i.e., general research). The National Institutes of Health Office of Human Subjects Research Protections determined that the federal regulations did not apply to our study because we were not collecting private information about individuals.
Because some publishers responded for a family of journals, to preserve independence of the data in our statistical analyses, we treated each family of journals as a single response rather than as multiple responses from different journals. For a journal family's impact factor, we averaged the impact factors of all of its journals. Furthermore, when a family contained both review and non-review journals, we analyzed the data two ways: first, by categorizing the family as "review" if at least one journal was a review journal, and second, by categorizing the family as "review" only if all journals in the family were review journals.
We used chi-square and Fisher's exact tests to compare journals having a policy with those not having a policy with respect to whether they were a review journal and with respect to the distribution of disciplines. To accommodate families of journals, scientific discipline was treated as four indicator variables, each coding the presence or absence of one category: biomedical research; physical science or engineering research; social or behavioral sciences research; and other disciplines. Because impact factor was not normally distributed, we used a Mann-Whitney test to compare impact factor between journals with a policy and those without. We used a chi-square test to determine whether there was an association between impact factor quartile and policy status. To assess possible response bias, we used Mann-Whitney tests to compare responding and nonresponding journals with regard to impact factor, and chi-square or Fisher's exact tests to compare responding and nonresponding journals with regard to review journal status and discipline. All P-values were 2-sided and considered statistically significant if less than 0.05.

RESULTS
Of the 200 journals contacted, 147 (74%) responded to our request for information, 45 (23%) did not respond, and 8 (4%) declined to provide information. The mean impact factor for responding journals was 18.1 (SD 14.2). Seventy-one (48%) were review journals, and 76 (52%) were not. One hundred six journals (72%) published biomedical research, 29 (20%) published physical science or engineering research, 7 (5%) published social and behavioral research, and 5 (3%) published research from various disciplines. Of the 147 journals in our sample, 95 (65%) had a retraction policy and 52 (35%) did not. (A list of responding journals is available on request.) Scientific discipline was the only variable we examined that was significantly associated with having a retraction policy. When a family of journals was classified as biomedical if at least one journal in the family was biomedical, 23 of 38 (60.5%) biomedical journal families had a retraction policy, compared to 5 of 17 (29.4%) non-biomedical journal families (chi-square P-value = 0.033).
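The chi-square comparison above can be reproduced from the stated counts. The sketch below uses Python and scipy purely for illustration (the paper does not say which software was used); Pearson's chi-square without continuity correction yields the reported P-value:

```python
# Reproduce the chi-square test comparing retraction-policy status
# between biomedical and non-biomedical journal families.
# Counts come from the Results section; the choice of Python/scipy
# is an assumption for illustration, not the authors' stated method.
from scipy.stats import chi2_contingency

# Rows: biomedical families (23 with a policy, 15 without),
#       non-biomedical families (5 with a policy, 12 without)
table = [[23, 38 - 23],
         [5, 17 - 5]]

# Pearson's chi-square without Yates' continuity correction
# matches the reported P-value of 0.033.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.3f}")  # P ≈ 0.033
```

With the small expected counts in the non-biomedical row, a continuity-corrected or Fisher's exact test would give a somewhat larger P-value, which is why the uncorrected Pearson statistic is the variant shown here.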
Among the 95 journals with a retraction policy, 89 (94%) had a policy that allowed the editors to retract articles without consent of all the authors, 50 (53%) had a policy that allowed the editors to publish an expression of concern without consent, 48 (51%) had a policy that required retraction notices to state the reason for the retraction, and 86 (91%) had a policy that described retraction procedures. Forty-nine (52%) retraction policies came from the publisher, 29 (31%) came jointly from the publisher and COPE, 6 (6%) came only from COPE, 5 (5%) came jointly from COPE and ICMJE, 4 (4%) came from the journal, 1 (1%) came jointly from the journal and COPE, and 1 (1%) came jointly from the journal and the National Library of Medicine (NLM).
There was no statistically significant response bias with regard to impact factor or other journal characteristics (i.e., review versus non-review journal or scientific discipline).

DISCUSSION
The most important finding of our study is that the proportion of journals in our sample with a retraction policy (65%) was roughly three times that reported by an earlier study of a similar group of journals (21%) [12]. A plausible explanation for this apparent increase is that more editors and publishers have become aware of the importance of dealing with retractions, and they have developed policies or adopted ones provided by COPE or other organizations. This increase might also lend further support to the hypothesis that retractions have increased in the last decade partly because more journals have adopted retraction policies [8]. However, both of these claims are speculative, and further research, such as interviews with editors, is needed to understand the factors that influence the retraction rate and retraction policy development.
Several editors of review journals that did not have retraction policies indicated in their responses that they saw no need for a retraction policy because they only publish review articles and are therefore not faced with the issues related to the publication of original data, such as fabrication or falsification. While it is probably the case that review journals rarely encounter problems with articles that warrant retraction, they still might need to occasionally retract articles in which authors have plagiarized other publications. We recommend that journals that do not have a retraction policy consider developing or adopting one.
Another important finding of our study is that most of the journals with retraction policies in our sample will retract articles (94%) or publish expressions of concern (53%) without the consent of all the authors. As noted above, editors and publishers may encounter difficult issues if not all of the authors agree to retract an article [1]. Adopting a policy that allows the journal to retract articles or publish expressions of concern without the consent of all the authors may help editors to deal with such situations in a manner that protects readers, preserves the integrity of the publication record, and preserves the reputation of the journal.
It is also worth noting that COPE's guidelines (published in 2009) appear to have had a significant influence on journal retraction policies. COPE was a source of guidance for almost half (42%) of the policies in our sample (31% COPE and publisher, 6% COPE only, 5% COPE and ICMJE). Additionally, several of the journals that did not have a policy said they consulted COPE's guidelines, even though they handled retractions on a case-by-case basis.
We also found that a higher proportion of biomedical than non-biomedical journals in our sample had a retraction policy. A plausible explanation for this difference is that biomedical journals have had to deal with more issues related to retractions than non-biomedical journals, so more of them have developed policies. However, this hypothesis is speculative, and more research is needed on why journals have or have not adopted retraction policies.
Our study has several potential sources of bias. First, the sample included a high percentage of review journals (48%). Review journals tend to have higher impact factors than other journals because review articles are read and cited more frequently than other types of article. Therefore, review journals are overrepresented among those with the highest impact factors [15]. Another potential bias is that our study focused on high-impact journals, so the results might not generalize to low-impact journals. To address these potential biases, it would be useful to conduct another study of journal retraction policies that includes more non-review journals and journals with lower impact factors.