On Papers Written Using Large Language Models


There’s an interesting preprint on arXiv by Andrew Gray, entitled ChatGPT “contamination”: estimating the prevalence of LLMs in the scholarly literature, that tries to estimate how many research articles have been written with the help of Large Language Models (LLMs) such as ChatGPT. The abstract of the paper is:

The use of ChatGPT and similar Large Language Model (LLM) tools in scholarly communication and academic publishing has been widely discussed since they became easily accessible to a general audience in late 2022. This study uses keywords known to be disproportionately present in LLM-generated text to provide an overall estimate for the prevalence of LLM-assisted writing in the scholarly literature. For the publishing year 2023, it is found that several of those keywords show a distinctive and disproportionate increase in their prevalence, individually and in combination. It is estimated that at least 60,000 papers (slightly over 1% of all articles) were LLM-assisted, though this number could be extended and refined by analysis of other characteristics of the papers or by identification of further indicative keywords.

Andrew Gray, arXiv:2403.16887

The method employed to make the estimate involves identifying certain words that LLMs seem to love and whose usage has increased substantially in recent years. For example, twice as many papers call something “intricate” nowadays compared to the past; there are also marked increases in the use of the words “commendable” and “meticulous”.
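The general idea can be sketched in a few lines of Python. This is not the paper’s actual pipeline or corpus; the marker words are the ones mentioned above, but the abstracts and the `marker_rate` helper are entirely hypothetical, just to illustrate comparing keyword prevalence across two publication years.

```python
# Marker words reported to be disproportionately favoured by LLMs.
MARKERS = {"intricate", "commendable", "meticulous"}

def marker_rate(abstracts, markers=MARKERS):
    """Fraction of abstracts containing at least one marker word."""
    hits = sum(1 for text in abstracts if markers & set(text.lower().split()))
    return hits / len(abstracts)

# Toy corpora standing in for pre-2023 and 2023 abstracts.
corpus_2022 = [
    "a simple method for sparse graphs",
    "an intricate proof of the main lemma",
    "results are robust to noise",
    "a new benchmark dataset",
]
corpus_2023 = [
    "a meticulous analysis of training dynamics",
    "commendable performance gains on all tasks",
    "an intricate transformer architecture",
    "a new benchmark dataset",
]

# Ratio of marker prevalence between the two years: 0.75 / 0.25 = 3.0
ratio = marker_rate(corpus_2023) / marker_rate(corpus_2022)
```

A real analysis would of course need word counts normalised over millions of papers and a baseline trend for each word, but the core comparison is this simple.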

I found this a commendable paper, which is both meticulous and intricate. I encourage you to read it.

P.S. I did not use ChatGPT to write this blog post.
