
Journals are making an effort to detect manipulated images of the gels used to analyse proteins and DNA. Credit: Shutterstock

It seems that every month brings a fresh slew of high-profile allegations against researchers whose papers — some of them years old — contain signs of possible image manipulation.

Scientist sleuths are using their own trained eyes, along with commercial software based on artificial intelligence (AI), to spot image duplication and other issues that might hint at sloppy record-keeping or worse. They are bringing these concerns to light on platforms such as PubPeer, an online forum where new posts flagging image concerns appear every day.

Some of these efforts have led to action. Last month, for example, the Dana-Farber Cancer Institute (DFCI) in Boston, Massachusetts, said that it would ask journals to retract or correct a slew of papers authored by its staff members. The disclosure came after an observer raised concerns about images in the papers. The institute says it is continuing to investigate the concerns.

That incident was just one of many. In the face of public scrutiny, academic journals are increasingly adopting tools and techniques, including commercial AI-based systems, to spot problematic imagery before, rather than after, publication. Here, Nature reviews the problem and how publishers are attempting to tackle it.

What sorts of image problems are being spotted?

Questionable image practices include the use of the same data across several graphs, the replication of photos or portions of photos, and the deletion or splicing of images. Such issues can indicate an intent to mislead, but can also result from an innocent attempt to improve a figure’s aesthetics, for example. Nonetheless, even innocent mistakes can be damaging to the integrity of science, experts say.

How prevalent are these issues, and are they on the rise?

The precise number of such incidents is unknown. A database maintained by the website Retraction Watch lists more than 51,000 documented retractions, corrections or expressions of concern. Of those, about 4% flag a concern about images.

One of the largest efforts to quantify the problem was carried out by Elisabeth Bik, a scientific image sleuth and consultant in San Francisco, California, and her colleagues1. They examined images in more than 20,000 papers that were published between 1995 and 2014. Overall, they found that nearly 4% of the papers contained problematic figures. The study also revealed an increase in inappropriate image duplications starting around 2003, probably because digital photography made it easier to alter photos, Bik says.

Modern papers also contain more images than do those from decades ago, notes Bik. “Combine all of this with many more papers being published per day compared to ten years ago, and the increased pressure put on scientists to publish, and there will just be many more problems that can be found.”

The high rate of reports of image issues might also be driven by “a rise in whistleblowing because of the global community’s increased awareness of integrity issues”, says Renee Hoch, who works for the PLOS Publication Ethics team in San Francisco, California.

What happened at the Dana-Farber Cancer Institute?

In January, biologist and investigator Sholto David, based in Pontypridd, UK, blogged about possible image manipulation in more than 50 biology papers published by scientists at the DFCI, which is affiliated with Harvard University in Cambridge, Massachusetts. Among the authors were DFCI president Laurie Glimcher and her deputy, William Hahn; a DFCI spokesperson said they are not speaking to reporters. David’s blog highlighted what seemed to be duplications or other image anomalies in papers spanning almost 20 years. The post was first reported by The Harvard Crimson.

The DFCI, which had already been investigating some of these issues, is seeking retractions for several papers and corrections for many others. Barrett Rollins, the DFCI’s research-integrity officer, says that “moving as quickly as possible to correct the scientific record is important and a common practice of institutions with strong research integrity”.

“It bears repeating that the presence of image duplications or discrepancies in a paper is not evidence of an author’s intent to deceive,” he adds.

What are journals doing to improve image integrity?

In an effort to reduce publication of mishandled images, some journals, including the Journal of Cell Science, PLOS Biology and PLOS ONE, either require or ask that authors submit raw images in addition to the cropped or processed images in their figures.

Many publishers are also incorporating AI-based tools, including ImageTwin, ImaCheck and Proofig, into routine or spot checks before publication. The Science family of journals announced in January that it now uses Proofig to screen all its submissions. Holden Thorp, editor-in-chief of the Science family of journals, says Proofig has spotted problems that led editors to decide against publishing papers. He says authors are usually grateful to have their errors identified.

What kinds of issues do these AI-based systems flag?

All these systems can, for example, quickly detect duplicates of images within the same paper, even if those images have been rotated, stretched or cropped or had their colour altered.
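None of these vendors publishes its exact matching algorithm, but the kind of within-paper duplicate detection described above is typically built on local-feature matching, which survives rotation, scaling and cropping. The short Python sketch below uses OpenCV’s ORB features to illustrate the general idea; the file names and the match threshold are illustrative assumptions, not any tool’s actual settings.

import cv2  # OpenCV, assuming the opencv-python package is installed

def duplicate_score(path_a, path_b, max_hamming=40):
    # Compare two figure panels with ORB keypoints, which are robust to
    # rotation, scaling and cropping; images are compared in greyscale.
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:  # no usable features found
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    good = [m for m in matches if m.distance < max_hamming]
    return len(good) / max(len(kp_a), 1)  # fraction of panel A that reappears in B

# Hypothetical file names for two panels from the same manuscript
print(duplicate_score("figure2_panelA.png", "figure5_panelC.png"))

A high score for two supposedly independent panels would simply queue the pair for a human editor to review, mirroring how journals describe using these tools.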

Different systems have different merits. Proofig, for example, can spot splices created by chopping out or stitching together portions of images. ImageTwin, says Bik, has the advantage of allowing users to cross-check an image against a large dataset of other papers. Some publishers, including Springer Nature, are developing their own AI image integrity software. (Nature’s news team is editorially independent of its publisher, Springer Nature.)
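ImageTwin does not disclose how its cross-paper matching works, but one widely used way to make such a check scale to a large corpus is to index a compact perceptual hash of every previously published figure and compare each new submission against that index. The sketch below uses the open-source Python imagehash library to illustrate the idea; the directory layout and distance cut-off are assumptions, and a plain perceptual hash is far less robust to rotation or heavy cropping than the feature matching shown above.

from pathlib import Path
from PIL import Image   # Pillow
import imagehash        # pip install imagehash

def build_index(corpus_dir):
    # One 64-bit perceptual hash per previously published figure (hypothetical corpus folder)
    return {p: imagehash.phash(Image.open(p)) for p in Path(corpus_dir).glob("*.png")}

def find_matches(query_path, index, max_distance=6):
    # Figures whose hash is within a small Hamming distance of the query are near-duplicates
    query_hash = imagehash.phash(Image.open(query_path))
    return [p for p, h in index.items() if query_hash - h <= max_distance]

index = build_index("published_figures")
print(find_matches("submitted_figure3.png", index))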

Many of the errors flagged by AI tools seem to be innocent. In a study of more than 1,300 papers submitted to 9 American Association for Cancer Research journals in 2021 and early 2022, Proofig flagged 15% as having possible image duplications that required follow-up with authors. Author responses indicated that 28% of the 207 duplications were intentional — driven, for example, by authors using the same image to illustrate multiple points. Sixty-three per cent were unintentional mistakes.

How well do these AI systems work?

Users report that AI-based systems definitely make it faster and easier to spot some kinds of image problem. The Journal of Clinical Investigation trialled Proofig in 2021–22 and found that the tool tripled the proportion of manuscripts identified as having potentially problematic images, from 1% to 3%2.

But they are less adept at spotting more complex manipulations, says Bik, or AI-generated fakery. The tools are “useful to detect mistakes and low-level integrity breaches, but that is but one small aspect of the bigger issue,” agrees Bernd Pulverer, chief editor of EMBO Reports. “The existing tools are at best showing the tip of an iceberg that may grow dramatically, and current approaches will soon be largely obsolete.”

Are pre-publication checks stemming image issues?

A combination of expert teams, technology tools and increased vigilance seems to be working — for the time being. “We have applied systematic screening now for over a decade and for the first time see detection rates decline,” says Pulverer.

But as image manipulation gets more sophisticated, catching it will become ever harder, he says. “In a couple of years all of our current image integrity screening will still be useful for filtering out mistakes, but certainly not for detecting fraud,” Pulverer says.

How can image manipulation best be tackled in the long run?

Ultimately, stamping out image manipulation will involve complex changes to how science is done, says Bik, with more focus on rigour and reproducibility, and repercussions for bad behaviour. “There are too many stories of bullying and highly demanding PIs spending too little time in their labs, and that just creates a culture where cheating is ok,” she says. “This needs to change.”


