Transparency and Data Sharing Blog

July 20, 2018

Future Perspectives in Peer Review

The Paradoxical Evolution of Peer Review

Richard Smith


Paradoxically, peer review, which is at the heart of science, is largely a faith-based rather than evidence-based activity. Peer review had hardly been studied at all until the late 1980s, when the International Congress on Peer Review and Scientific Publication, a conference held every 4 years, began. Research shows that peer review is slow, expensive, somewhat of a lottery, poor at detecting errors, prone to bias, and easily abused. Highly original science is often rejected by peer review, leading some to consider the process anti-innovatory and conservative.

Extensive examination of the existing research in the form of 2 systematic Cochrane reviews1,2 did not find peer review to be effective, although we must remember that absence of evidence of effectiveness is not the same as evidence of ineffectiveness. Despite the evidence and challenges, peer review of manuscripts continues on a massive scale, consuming a large amount of resources.

Traditional peer review is single-blind in that the authors do not know the identity of the reviewers but the reviewers know the identity of the authors. This model probably still accounts for most peer review, although the number and nature of the reviewers varies considerably. There is also considerable variation in how much the reviewers contribute to the final decision on publication.

In double-blind peer review, the reviewers are also blinded to the identity of the authors. Large-scale trials of double-blind peer review failed to show a benefit, and they were followed by a steady opening up of the peer-review process.

Some methods of open peer review allow authors to see the reviewers’ names and comments, while other methods publish all the comments along with the paper. The most open peer-review models conduct the whole process online, in full view of the readership.

[Considerations]

Perhaps the most cutting-edge innovation in peer review is open peer review that takes place after publication. This model is exemplified by F1000Research and affiliated “journals” such as Wellcome Open Research and Gates Open Research. After a preliminary check of the studies, including ethics committee approval and confirmation of compliance with standard publication guidelines, the papers are published online together with all the raw data. Peer review takes place after this online publication. The authors select reviewers from a database or may suggest their own reviewers based on established criteria, such as being academics of a certain standing (ie, full professor). Reviewers are asked to comment not on the originality or importance of the study, but simply on its methodologic soundness and whether the conclusions are supported by the methods and data. Once the reviews are posted, authors and readers can see both the reviews of the published paper and the identities of the reviewers. After publication, anybody can post comments at any time. Authors have the opportunity to respond to reviewers’ comments or point out where reviewers have misunderstood the study. Peer review becomes less of a one-time judgement and more of an ongoing dialogue aimed at ensuring that the best possible paper is published.


[Considerations]

While there are many innovations in how peer review is conducted, sadly, there does not appear to have been any rigorous evaluation of the effectiveness of these innovations. Nevertheless, peer review is evolving: the process is opening up, full data sets are being included, and authors are expected to declare their contributions and conflicts of interest. Generally, there is more interest in the science of science publishing.


References

  1. Jefferson T, Alderson P, Wager E, Davidoff F. Effects of editorial peer review: a systematic review. JAMA. 2002;287(21):2784-2786. doi:10.1001/jama.287.21.2784.

  2. Wager E, Middleton P. Effects of technical editing in biomedical journals: a systematic review. JAMA. 2002;287(21):2821-2824. doi:10.1001/jama.287.21.2821.


Supplemental Reading List

Smith R. Classical peer review: an empty gun. Breast Cancer Res. 2010;12(suppl 4):S13. doi:10.1186/bcr2742.

Spicer A, Roulet T. Explainer: what is peer review? http://theconversation.com/explainer-what-is-peer-review-27797. Published June 18, 2014. Accessed July 16, 2018.



May 9, 2018

Perspectives on Predatory Journals and Congresses

Open Access or Public Trust: A False Dichotomy

Catriona MacCallum


Although the term ‘predatory publishing’ wasn’t coined until Jeffrey Beall notoriously introduced it in 2011, the notion that open access (OA) was undercutting or even corrupting the quality of scholarly publishing goes back to at least 2007 with a campaign led by Eric Dezenhall on behalf of some large commercial subscription publishers. The result was the ‘Partnership for Research Integrity in Science and Medicine’—PRISM—which set out to question the quality of articles being made freely available and to push back on the first stirrings of public and OA mandates by funders. At the time, I had been a professional editor for 9 years, in sole charge of an Elsevier journal before moving to PLOS to help launch and run some of their OA journals. Despite changing business models, there was no difference in the rigour with which I conducted peer review. Fortunately PRISM backfired and sank without trace, but the myth that OA somehow meant predatory, and either low quality or no peer review, has stubbornly and erroneously persisted.

This is not to say that fraudulent practices do not exist; they do, and they should be taken seriously. But they are not unique to OA journals, and they should not be seen as a product of the model. The vast majority of OA journals and articles are rigorously peer reviewed and hosted by publishers and editors who genuinely care about the integrity of science. As pointed out by Angela Sykes in a previous post in this series, the Directory of Open Access Journals (DOAJ) has stringent criteria for a journal to be eligible for inclusion on its list. There is also a set of principles, created through a partnership of the Committee on Publication Ethics (COPE), the Open Access Scholarly Publishers Association (OASPA), the DOAJ, and the World Association of Medical Editors (WAME), to which publishers wishing to join these organisations must adhere.

For several years I was chair of the OASPA membership committee. Many of the applications came from independent scholars from all over the world trying to set up reputable journals in their field. What they lacked, as researchers, was training on how to go about it—an understanding of what running peer review actually entailed, the checks required before peer review can even start, and the licensing, copyright, infrastructure, and metadata required to make content discoverable (the need for DOIs, for example). It was incredibly rare to receive an application that was not genuine. Much of the work of the committee was, and remains, about education.

All publishers have a responsibility to provide a high-quality, rigorous service for their authors, editors, reviewers, and readers. There is no doubt that there are problems and corrupt practices in journal publishing, and researchers need to be confident in the credibility of a journal and publisher before they submit. We need, however, to get away from the idea that there is a dichotomy between good and evil in scholarly publishing, that practice is either predatory or good, and that OA equates with the former. Indeed, a study by Olivarez et al. this year showed that when three researchers independently tried to apply Beall’s criteria to a set of journals, they not only failed to identify the same journals as each other, but their lists also included many traditional ‘top-tier’ journals.

In our digital age, the meaning of quality is changing. OA and the increasing focus on transparency, collaboration, trust, and discoverability in open science mean that journals, publishers, and indeed all stakeholders involved in scholarly communication need to up their game. Peer review itself is no guarantee of quality, as the increasing number of retractions due to fraud, including from very high-profile, reputable traditional journals, attests.

The greatest challenge is not ‘predatory’ publishers; it is the culture in which research currently operates and the focus on the journal as a proxy for researcher quality. What quality means depends on context and on who is doing the evaluating. Moreover, we need to value and evaluate all the varied contributions that researchers make to science (including their contributions to peer review) and to the wider knowledge system. Publishers, institutions, and other stakeholders can sign DORA (the San Francisco Declaration on Research Assessment) as a commitment to this.

We also need to put safeguards and training in place for researchers where bad practice exists, such as the recently launched ‘Think, Check, Submit’ campaign, which aims to raise awareness among authors about what to look for in a journal before entrusting it with their work. But this applies to any journal. It is time to get away from the polarised, simplified dichotomy between predatory and good research, or subscription and OA publishing, and focus on what really counts (see Figure).

Most importantly, is it possible for others, including machines, to reuse the information so that my work can potentially contribute to the public good for the benefit of science and society?

[Considerations]

Acknowledgement: I would like to thank Helen Dodson, whose presentation at UKSG 2018 pointed me to several references and helped inform this post.



April 12, 2018

Perspectives on Predatory Journals and Congresses, Editor Interview

Insights on Predatory Journals from Patricia “Patty” Baskin

Patricia “Patty” Baskin


To complement the blog series on predatory journals and congresses, MPIP caught up with Patty Baskin to gain her expert insights on threats posed by predatory journals and what authors and publication professionals can do to protect their work. Patty Baskin is the Executive Editor at the American Academy of Neurology and immediate past president of the Council of Science Editors.

Visit the MPIP YouTube Channel to view the video of the interview today!



February 28, 2018

Perspectives on Predatory Journals and Congresses, Part 2: The Biopharmaceutical Industry Perspective

Addressing the Growing Problem of Predatory Journals and Publishers

Angela Sykes, MA, MPhil


In 2008, Jeffrey Beall, an academic librarian, noticed that he was receiving spam email solicitations from broad-scoped, newly formed open access (OA) journals whose main intent seemed to be the collection of article processing/publication charges. He later coined the term “predatory” to describe these journals and publishers with dubious practices. Since then other terms have been used, such as “questionable,” “deceptive,” and “pseudo.” However, regardless of what term is used, the practices of these journals and publishers are strikingly similar. (See graphic)

[Graphic: Practices]

Predatory publishers are a threat to scientific integrity as they make it difficult to demarcate sound science from fake science that can be damaging to public health.

In 2016, the Federal Trade Commission (FTC) issued a complaint against OMICS, a publisher of hundreds of online journals, accusing them of deceptive practices, including publishing articles with little to no peer review and listing prominent academics as editorial board members without their consent. Then, in 2017, an article published in Bloomberg Businessweek reported that several prominent pharmaceutical companies, including AstraZeneca, Bristol-Myers Squibb, Gilead, Merck, Novartis, Eli Lilly, GlaxoSmithKline, and Pfizer, had published in OMICS journals or participated in OMICS congresses. The article questioned “whether drugmakers are purposely ignoring what they know of OMICS’s reputation or are genuinely confused amid the profusion of noncredible journals.”

This article brought the problem of predatory journals and publishers to the forefront at Pfizer and increased our awareness of this growing problem. Since then, we have taken steps to address the issue by increasing colleagues’ awareness of predatory journals, publishers, and congresses, and by making changes to our publications software application so that OMICS journals are easier to identify before they are imported into the system. We are also researching subscription-based tools, such as the Journals & Congresses database, which has stringent criteria for inclusion.
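The post does not describe how that software change works. Purely as an illustration of the kind of pre-import screen described above, a minimal sketch might look like the following; the function name, field names, and watch-list contents are hypothetical and do not describe Pfizer's actual system.

```python
# Hypothetical illustration only: a simple pre-import screen that flags records
# whose journal or publisher name matches an internal watch list. The names and
# list contents are invented for this sketch.
WATCH_LIST = {"omics"}  # lower-case fragments of flagged publisher names


def needs_review_before_import(journal_name: str, publisher: str) -> bool:
    """Return True if the record should be manually reviewed before import."""
    text = f"{journal_name} {publisher}".lower()
    return any(flagged in text for flagged in WATCH_LIST)


print(needs_review_before_import("Example Journal", "OMICS International"))    # True
print(needs_review_before_import("Example Journal", "Example Society Press"))  # False
```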

But what else can be done to reduce the amount of research published by predatory journals, not only by pharmaceutical companies but by all parties involved in the development of medical/scientific publications? (See graphic)

[Graphic: Practices]

Some organizations have already taken action. In 2014, the Directory of Open Access Journals (DOAJ) developed more stringent criteria for indexing and required journals to reapply for inclusion. In November 2017, the National Institutes of Health (NIH) issued a statement requesting that NIH-funded research be submitted to reputable journals to help protect the credibility of papers arising from its research investment.

Please note that the content of this blog post was prepared by me, Angela Sykes, in my personal capacity. The opinions expressed in this presentation are my own and do not reflect the views of Pfizer.



January 25, 2018

Perspectives on Predatory Journals and Congresses, Part 1: The Agency Perspective

Eight Tips for Vetting Journal and Congress Websites

Ray Hunziker

Senior Editor, Healthcare Consultancy Group


Most of us are aware of “predatory” journals and congresses that promote themselves as legitimate but engage in deceptive or exploitative practices at the expense of unsuspecting authors. Many fail to deliver even the basic editorial or publishing services provided by reputable journals or the networking value provided by legitimate congresses. Beyond the cost, credibility, and reputational implications for authors of being associated with these predatory journals/congresses, they also rob the scientific community of timely access to research. So what steps can we take to protect ourselves from their predatory practices?

To provide you with a framework for making your own assessments, below are 8 criteria I use to evaluate the probable legitimacy of journals or congresses based on their websites. Try not to put too much emphasis on a single item, but if a journal/congress website falls short on more than one, you may be dealing with a predator.


1. Professional website design

Less is more! Legitimate journal/congress websites typically have a “clean” look. Conversely, predators often have a “busy” look, with inconsistent font choices, poor text and photo alignment, bad line breaks, and a plethora of animations. In short, they look amateurish. Scrolling text banners at the top of the page are especially suspicious. Beware if contact information uses free email providers such as Hotmail or Gmail.

2. Proper use of English

Although small international journals/congresses can be perfectly legitimate and still have a few English-as-a-second-language grammar and usage errors, a large number of typos and grammatical errors is a clear red flag. Also, look for sections that are riddled with errors alternating with error-free sections; this may be a clue that some content was plagiarized.

3. Proportional and appropriate photos

Designers of many predatory websites seem unable to reduce photo sizes proportionally, leading to “squashed” images. Also, be suspicious of a small journal/congress with a large number of alleged editorial board or faculty members, or “rock star” names in the field. These people are probably unaware that they are on the website, and their photos may have been captured from LinkedIn, Facebook, etc. Finally, are photos from a past meeting cropped tightly, in such a way that you can’t tell if there were 20 attendees or 200?

4. Functional hyperlinks

Logos of sponsoring or partnering organizations should be functional hyperlinks. Predatory websites often present a dizzying collage of nonfunctional logo images, some wildly inappropriate for the subject matter of the journal/congress.

5. Complete and logical content

Look out for that little construction worker! Legitimate websites should have complete content. For example, a journal editorial board that is “Coming Soon” or an impending congress with venue information still “Under Construction” is likely a predator. Additionally, journals or congresses with an extremely broad focus, perhaps combining medical and physical sciences, are likely predators.

6. Flexibility in travel and accommodations

Can you make your own travel arrangements if you prefer? Required use of preferred travel and accommodations vendors is a red flag for congresses. Such companies are usually owned by the sponsoring organization. Look for clear statements of all registration costs and refund policies. Look for a secure mechanism for credit/debit card payments (https and a lock icon in the browser address bar), and avoid sites that merely provide bank transfer information.

7. Location, location, location

A very devious tactic predatory journals and congresses often use is to craft a name very similar to that of a legitimate counterpart, or to “clone” the content of a legitimate website, or to associate their predatory journal with a legitimate one on the same site. Often they just add a word, such as “American” or “International,” to a legitimate journal/congress name. Unmasking these frauds can be very challenging.

Predators will often spoof a location in the United States or Europe, but there are web tools that can trace the true location of the web domain housing a website. You can also use Google Earth to check the street address and make sure it is not simply a house or a storefront. Be cautious if the address given is in Delaware, as incorporation laws make it easy for an overseas predator to register a business there.

8. Networking

Finally, network with your senior colleagues. Has anyone ever heard of this journal or congress? Have they published in the journal or attended a meeting? Were their experiences positive?


A little due diligence upfront will pay off in the long run to ensure you get the most from your publication and congress experiences. Although the tips I’ve provided here may give you some objective evidence regarding the legitimacy of a journal/meeting, your instincts are also valuable. If it feels wrong, it probably is.
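For readers who want to partly automate this kind of screening, here is a minimal sketch of how the contact-email check in tip 1 and the domain-registration check in tip 7 could be scripted in Python. It is an illustration under stated assumptions, not a validated vetting tool: the site URL and the list of free email providers are placeholders, and the standard whois command-line utility is assumed to be installed.

```python
# A minimal, illustrative sketch (not a vetting tool) of two automatable checks:
# tip 1 (free email providers in contact details) and tip 7 (where the web domain
# is really registered). The URL and provider list are placeholders, and the
# standard "whois" command-line utility is assumed to be installed.
import re
import subprocess
import urllib.request

FREE_PROVIDERS = {"gmail.com", "hotmail.com", "yahoo.com", "outlook.com"}


def free_email_contacts(url):
    """Tip 1: return addresses on the page that use free email providers."""
    html = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", errors="ignore")
    emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
    return {e for e in emails if e.rsplit("@", 1)[1].lower().rstrip(".") in FREE_PROVIDERS}


def domain_registration(domain):
    """Tip 7: pull registrant country, creation date, and registrar from WHOIS."""
    record = subprocess.run(["whois", domain], capture_output=True, text=True, timeout=30).stdout
    keys = ("Registrant Country", "Creation Date", "Registrar:")
    return "\n".join(line.strip() for line in record.splitlines() if any(k in line for k in keys))


if __name__ == "__main__":
    print(free_email_contacts("https://example.org"))  # placeholder journal/congress site
    print(domain_registration("example.org"))          # does registration match the claimed location?
```

Neither check is conclusive on its own; as with the tips above, treat the output as one more piece of evidence rather than a verdict.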



 
