Journal Policies and Artificial Intelligence

As generative artificial intelligence continues to develop, more and more academic publishers are grappling with the reality of easily accessible AI tools such as ChatGPT. Rather than taking a strictly prohibitive approach to AI, a more realistic perspective recognizes the potential value of generative AI for students coming to a subject for the first time, while being clear about the tools' shortcomings: not all of the information they produce is accurate, and many generative AI products have been known to generate fake or incorrect citations when prompted.

Given the popularity of these new AI tools, and with the understanding that access to them is hard to limit or manage, many publishers are developing explicit policies around the use of AI in scholarship. Below are some examples of the types of language that can be found across the publishing spectrum. If you are the editor of a journal here at BC and are curious about best practices for forming this type of policy, be sure to reach out to the Scholarly Communications team at the Boston College Libraries.

Nature

Nature is a renowned international science and technology journal, and its policy clearly states the responsibility authors bear for content produced using large language models (LLMs), including generative artificial intelligence models such as ChatGPT. The following two paragraphs are taken from an editorial published in January 2023, when Nature added this language to its author guidelines.

“First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

“Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.”

JAMA Network

The Journal of the American Medical Association published an editorial in January 2023 addressing the use of nonhuman authors:

Author Responsibilities

Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.

If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods.

This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)

Reproduced and Re-created Material

The submission and publication of content created by artificial intelligence, language models, machine learning, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

Image Integrity

The submission and publication of images created by artificial intelligence, machine learning tools, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

WAME

Later in 2023, the World Association of Medical Editors (WAME) also established a set of recommendations around the use of generative AI, and specifically chatbots such as ChatGPT.

Chatbots are activated by a plain-language instruction, or “prompt,” provided by the user. They generate responses using statistical and probability-based language models. (5) This output has some characteristic properties. It is usually linguistically accurate and fluent but, to date, it is often compromised in various ways. For example, chatbot output currently carries the risk of including biases, distortions, irrelevancies, misrepresentations, and plagiarism – many of which are caused by the algorithms governing its generation and heavily dependent on the contents of the materials used in its training. Consequently, there are concerns about the effects of chatbots on knowledge creation and dissemination – including their potential to spread and amplify mis- and disinformation (6) – and their broader impact on jobs and the economy, as well as the health of individuals and populations. New legal issues have also arisen in connection with chatbots and generative AI. (7)

Published in May 2023, this statement expands and clarifies recommendations WAME first announced in January. While publishers and editors are still working out how AI fits into the larger picture of scholarly publishing, remaining vigilant about changes in the technology will be a key task for journal managers and editors.

What does the future hold?

Ultimately, every advance in technology, whether the ability to copy printed pages or to access content digitally, brings a period of both fear and excitement as established models of publication and access are challenged by something new. Whatever the pitfalls and successes of generative AI turn out to be, it will be important for journal editors, and for scholars submitting their work, to understand the policies around using generative AI and whether or not consulting it is accepted for a given publication.