Category Archives: Publishing

Journal Policies and Artificial Intelligence

As developments continue in the world of generative artificial intelligence, more and more academic publishers are grappling with the reality of easily accessible AI tools such as ChatGPT. Rather than taking a strictly prohibitive approach to AI, a more realistic perspective may be one that recognizes the potential value of generative AI for students new to a subject while being clear about its shortcomings: not all of the information these tools produce is accurate, and many generative AI products have been known to generate fake or incorrect citations when prompted.

As a result of the popularity of these new AI tools, and with the understanding that access to them is hard to limit or manage, many publishers are developing explicit policies around the use of AI in scholarship. Below are some examples of the types of language that can be found across the publishing spectrum. If you edit a journal here at BC and are curious about best practices for forming this type of policy, be sure to reach out to the Scholarly Communications team at the Boston College Libraries.

Nature

Nature is a well-known and renowned international science and technology journal, and its policy clearly states the relationship that authors must have with content they produce using large language models (LLMs), including generative artificial intelligence tools such as ChatGPT. The following two paragraphs are taken from an editorial published in January 2023, when Nature added this language to its author guidelines.

“First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.”

JAMA Network

The Journal of the American Medical Association published an editorial in January of 2023 speaking to the use of non-human authors:

Author Responsibilities

Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.

If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods.

This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)

Reproduced and Re-created Material

The submission and publication of content created by artificial intelligence, language models, machine learning, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

Image Integrity

The submission and publication of images created by artificial intelligence, machine learning tools, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

WAME

Later in 2023, the World Association of Medical Editors (WAME) also established a set of recommendations around using generative AI, specifically chatbots such as ChatGPT.

Chatbots are activated by a plain-language instruction, or “prompt,” provided by the user. They generate responses using statistical and probability-based language models. (5) This output has some characteristic properties. It is usually linguistically accurate and fluent but, to date, it is often compromised in various ways. For example, chatbot output currently carries the risk of including biases, distortions, irrelevancies, misrepresentations, and plagiarism – many of which are caused by the algorithms governing its generation and heavily dependent on the contents of the materials used in its training. Consequently, there are concerns about the effects of chatbots on knowledge creation and dissemination – including their potential to spread and amplify mis- and disinformation (6) – and their broader impact on jobs and the economy, as well as the health of individuals and populations. New legal issues have also arisen in connection with chatbots and generative AI. (7)

Published in May of 2023, this statement seeks to clarify and extend WAME's similar January recommendations. While publishers and editors are still figuring out how AI fits into the larger picture of scholarly publishing, remaining vigilant about changes in the technology will be a key task for journal managers and editors.

What does the future hold?

Ultimately, with every improvement in technology, whether the ability to copy printed pages or to access content digitally, there is a period of fear and excitement as existing models of publication and accessibility are threatened by a new one. Whatever the pitfalls or successes of generative AI, it will be important for journal editors and for scholars submitting their work to understand the policies around using generative AI and whether consulting it is accepted by a given publication.

Mass-Resignation in Response to “Greed” at Elsevier

“Elsevier preys on the academic community, claiming huge profits while adding little value to science.” Professor Chris Chambers makes his thoughts clear as one of the more than 40 leading scientists who have resigned from the editorial board of NeuroImage in response to Elsevier’s arbitrarily high publication costs, charged even though, as Chambers makes clear, the company adds little value to science.

Unfortunately, while this mass resignation sends a clear message about how many academics feel, Elsevier does not stand alone: the current business model allows publishers in general to keep their profit margins extremely high. Universities and institutions sponsor research at no cost to the publisher, and academics sit on editorial boards, conduct reviews, and collate and prepare issues for publication at zero or minimal cost to the publisher. For authors and researchers, however, the cost comes either in the form of consistently increasing article processing charges (APCs) or in subscription plans that can be unsustainable for individual researchers or labs and that often consume a massive portion of a library’s yearly acquisitions budget.

NeuroImage's most recent cover. The current cost to publish in NeuroImage is $3,450, more than double the average APC as of 2021.

Ultimately, despite developments urging academics and publishers to support independent, open publication practices, the reputations of well-known, established publishers allow them to raise APCs to prices that would otherwise be completely uncompetitive and inaccessible. This mass resignation could be the kind of action that pressures big publishers into keeping open access publication prices more reasonable, since rival journals led by the new, formerly Elsevier editorial teams could present a real threat to their business model.

More recently this summer, the editorial board of Elsevier’s Design Studies also left the journal en masse in response to the removal of editor-in-chief Peter Lloyd, fairly nakedly a consequence of his resistance to rapidly increasing the number of articles the journal accepts each month. One of the editors involved in the walkout, Linden Ball, a board member and professor of cognitive psychology at the University of Central Lancashire, reported that board members were “appalled” by Elsevier’s decision to remove Lloyd. He continued to Inside Higher Ed: “This focus on the quantity of published articles rather than their quality appears to be purely motivated by a desire for large profits.”

In a moment when future modes of publication and their ethics are constantly being scrutinized, the editorial boards of journals published by Elsevier and companies like it hold unique power: they can shut down production and point authors and editors toward methods of publishing that are more equitable and accessible, driven by the reputations of the editors and authors involved rather than by the profit motive of a large publishing company.

Standards Established for Peer-Review Terminology

In June of 2023, the National Information Standards Organization (NISO) published a set of standard terminology for peer review. The standards cover a variety of topics, mostly pertaining to how journals and reviewers can refer to their interactions specifically and clearly. In 2020, a BioMed Central article cited common confusion around the use of the term “double-blind,” which is vague and unspecific. That study concluded, among other things, that “the term ‘double-blind’ (and its derivatives, single- and triple-blind, fully blind, and partially blind or masked) has no standard or widely accepted definition. Agreement about which groups are blinded is poor, and authors using these terms often do not identify which groups were blinded, despite specific reporting guidelines to the contrary” (Lang, Stroup; 2020). To combat this, NISO working groups and committees came together to publish standard definitions that should be observed in order to describe a journal’s peer-review process as explicitly and clearly as possible. The table below presents a more specific, and sensitive, way to address the different levels of anonymity in a review.

all identities visible: Reviewer identity is visible to author; author identity is visible to reviewer; reviewer and author identities are visible to the (decision-making) editor.

single anonymized: Reviewer identity is not made visible to author; author identity is visible to reviewer; reviewer and author identities are visible to the (decision-making) editor.

double anonymized: Reviewer identity is not made visible to author; author identity is not made visible to reviewer; reviewer and author identities are visible to the (decision-making) editor.

triple anonymized: Reviewer identity is not made visible to author; author identity is not made visible to reviewer; reviewer and author identities are not made visible to the (decision-making) editor.

With more clearly defined terminology, readers can be better informed about precisely what type of review was carried out before an article of interest was published. The standard also includes a suggested set of metadata about the review process that journals should collect: dates of submission, acceptance, and publication; whether the submission was “fast-tracked”; the number of reviewers involved; the number of revision rounds; and disclosure of what technical tools were used in the review process.
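As a rough illustration only, and not language taken from the NISO standard itself, a journal management system might record this suggested metadata in a simple structure along the following lines. The field names and example values here are hypothetical assumptions.

```python
# Minimal sketch of a record for the review-process metadata suggested by the
# NISO standard; field names and values are illustrative, not prescribed.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class ReviewMetadata:
    date_submitted: date
    date_accepted: Optional[date] = None
    date_published: Optional[date] = None
    fast_tracked: bool = False
    review_type: str = "double anonymized"  # one of the NISO anonymity terms
    number_of_reviewers: int = 0
    revision_rounds: int = 0
    review_tools_used: List[str] = field(default_factory=list)  # e.g., similarity checkers


# Example record for a single article
record = ReviewMetadata(
    date_submitted=date(2023, 1, 15),
    date_accepted=date(2023, 5, 2),
    date_published=date(2023, 6, 20),
    number_of_reviewers=2,
    revision_rounds=1,
    review_tools_used=["similarity check"],
)
```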

To find out more about this standard and specific language on a variety of peer-review terms, be sure to navigate to the NISO Standards. For more information on standards, you can also visit the Boston College Libraries’ Engineering LibGuide on standards organizations, which help to create and publish standards for a variety of disciplines.