
Journal pulls paper featuring AI-generated images of massive rat genitals

Published Feb 16th, 2024 3:38PM EST
Samples of Midjourney AI-generated images.
Image: Midjourney


A new paper published in the journal Frontiers has been retracted. The retraction came after multiple users reported issues with the images and figures featured in the study. Titled "Cellular Functions of Spermatogonial Stem Cells in Relation to JAK/STAT Signaling Pathway," the study was available for less than a week. Its appearance in the journal has raised concerns about AI-generated images in scientific studies, and it likely won't be the last case of its kind.

There has been growing concern around the use of AI-generated art overall, but the fact that these concerns are now spilling into the scientific world is quite distressing. Mind you, the authors of the paper did disclose that the images were generated with Midjourney. Sure, that's great. The problem, though, is that those images are riddled with errors.

Not only does one of those images depict a rat with massively oversized genitals, but the text labels within the images are badly misspelled or outright gibberish. You can see some of the most egregious images, like the one of the rat, on X (formerly Twitter). Others reveal nonsensical labels when you zoom in on them.

To spare everyone from the worst of it, I've forgone embedding the image of the rat here. If you really want to see it, check out the screenshot captured by X user @DrCJ_Houldcroft.

Following the discovery of these AI-generated images, Frontiers added a note to the paper and retracted it. The journal also released an official apology, stating that the crowd-sourcing dynamic of open science had helped catch the error. According to Frontiers, one of the study's reviewers raised valid concerns and requested revisions from the authors, but the authors never responded to those requests.

It's unclear why the article proceeded to publication if the authors never responded. All we know at the moment is that the appearance of AI-generated images in scientific studies is a huge cause for concern.

Because of how easy it is for people to pass off AI-generated art as their own, we need stricter ways to vet images than the methods many companies currently offer. OpenAI, for instance, is adding watermarks to the metadata of images generated with ChatGPT, as well as visible watermarks, but not every AI image generator does this.

Josh Hawkins has been writing for over a decade, covering science, gaming, and tech culture. He is also a top-rated product reviewer with experience in extensively researched product comparisons, as well as reviews of headphones and gaming devices.

Whenever he isn’t busy writing about tech or gadgets, he can usually be found enjoying a new world in a video game, or tinkering with something on his computer.