The credibility of the scientific literature has been somewhat undermined by the recent publication (and subsequent retraction) of a paper that included some very obviously incorrect AI-generated material.
In theory, any scientific paper should be a highly trusted source of information. It would clearly be a challenging exercise for someone to make up scientific data that builds a coherent story, especially one robust enough to get past the reviewers at peer-reviewed journals. However, it’s not impossible; the ‘research’ that led to the claim that the MMR vaccine caused autism is perhaps the most infamous example. But with the rise of AI, the risk to scientific integrity is growing, as the results of a recent study illustrate.
A recent article in Scientific American1 reports on a study carried out by Andrew Gray of University College London, which estimated that “at least 60,000 papers – slightly more than 1 per cent of all scientific articles published globally last year – may have used an LLM” (LLM = Large Language Model, a type of AI algorithm).
It is understandable that researchers, who are notoriously time-poor, would want to use tools to help speed up the process of writing papers, especially if they are not strong writers or if English isn’t their first language. However, even for such uses, and even if the authors review and edit the AI-generated content, many scientists consider this simply unacceptable, as scientific research is meant to be the original thought and work of the authors.
It could be argued that using AI to overcome writer’s block is hardly a major crime, as long as the author reviews and edits accordingly, but there are two fundamental issues with this. Firstly, do the author, and indeed the peer reviewers, read the resulting paper in detail and edit accordingly? And secondly, where does the use of AI stop? AI is perfectly capable of generating images and even fabricating complete data sets.
One of the key problems with AI is that it learns from published data, a lot of which is itself inaccurate. A second problem is that AI aims to please the operator and will generate an output to meet whatever criteria it is given – it doesn’t check for accuracy and, in fact, some of its output can be entirely made up. So, whether you are a scientist or a pest manager, you need enough knowledge of the subject to check that what your AI tool generates is accurate.
This is clearly not happening.
There is no doubt there are thousands of small examples of AI use. But then there are the obvious ones, which stick out like the proverbial dog’s b****. Or in this case, ones belonging to a rodent.
A recent paper published in Frontiers in Cell and Developmental Biology included a rather bizarre image of the reproductive anatomy of a male rat (Figure 1). It is difficult to describe exactly what is going on in the figure, and harder still to understand how such an image, with its bizarre labelling, was submitted in the first place. But perhaps even more troubling is that it wasn’t picked up by the peer reviewers!

Which brings us to another concern: some reviewers may be using AI during the peer review process to do the reviewing and generate their reviewers’ reports.
Alas, in the scientific arena, where performance is often measured by the number of publications – “publish or perish”, as they say – the temptation to use AI to a greater or lesser degree is significant.
Indeed, AI is already capable of creating completely fictitious research on a novel topic. Researchers commissioned an AI to create such a study, and it delivered a hypothesis, methods, complex data, images, analysis and conclusions with aplomb.2 However, it did struggle to reference the existing literature correctly.
Feeding off this need to “publish or perish”, there are also plenty of ‘predatory journals’ that will take money from scientists to publish their work, with few of the necessary quality checks.
So, should we trust scientific papers moving forward? Whilst scientific papers can generally be trusted, it won’t take many high-profile examples of AI fraud to completely undermine that trust. Quality publishers are acutely aware of this and are reviewing their processes to make sure they only publish original (non-AI) work.
The problem for the lay person is that they may not know whether the paper they are reading was published in a quality journal. Checking whether the journal only publishes peer-reviewed papers and whether it belongs to one of the big publishing houses is a good starting point.
Readers of Professional Pest Manager magazine can be assured that all magazine-authored articles are written by old-fashioned humans, either by one of the editorial team or by a recognised industry expert.
References:
1 Stokel-Walker, C. (2024). AI Chatbots Have Thoroughly Infiltrated Scientific Publishing. Scientific American, 331(1).
2 Elbadawi, M., et al. (2024). The role of artificial intelligence in generating original scientific research. International Journal of Pharmaceutics, 652. https://doi.org/10.1016/j.ijpharm.2023.123741
Figure 1 image link:
https://www.frontiersin.org/files/Articles/1339390/fcell-11-1339390-HTML/image_m/fcell-11-1339390-g001.jpg