Scattered clip art of computers, keyboards, gears, play buttons, hands on a keyboard set to the left of a box reading, "Coalition S: Making full and immediate Open Access a reality." Cover for their Proposal Towards Responsible Publishing.

cOAlition S Issues a New Proposal Towards Responsible Publishing

While October 31st can bring lots of scary sights and frights, researchers and scholars seeking more equitable publishing models were given some food for thought via a new proposal from cOAlition S – a leading initiative in creating open access models that offer greater equity, more timely publication, and more comprehensive peer review than traditional subscription-based publishing.

In their proposal, the international consortium identifies four main problems with scholarly communication that a move toward more open publishing should address:

  • Publishing models are still highly inequitable
  • Research assets’ publication is needlessly delayed
  • The full potential of peer review is not being utilized
  • Editorial gatekeeping is at odds with academic career incentives and damages the sciences.

Organizations across the world are fighting for more broadly open access, since not all open access publications offer the same level of accessibility. As more models push for Diamond publication and institutions negotiate transformative deals that grant more rights to students and faculty, keeping the considerations above in mind will help ensure that scholars are getting what they need from their institutions.

As more and more organizations and higher education institutions question the power dynamics inherent in current publishing models, publishers are attempting both to meet user demand and to preserve their prestigious reputations in order to maintain subscription models and profitability. Additionally, as open access players like MDPI fight off attacks on their consistently high publication volume, it will be more important than ever for authors to research not only the journal they are submitting to but also the publisher – understanding how their academic contributions will affect discourse in their field, and perhaps signaling to other aspiring authors which journals are the safest, most accessible places to publish research.

Open Access Week Panel

This year for Open Access Week, the Boston College Libraries, in conjunction with the Schiller Institute for Integrated Science and Society, will be hosting a panel of speakers. The panel includes Raquel Muñiz, Assistant Professor in the Lynch School; Kelly Gray, a third-year English PhD student; and Bryan Ranger, Assistant Professor in Engineering. Raquel and Bryan are also past recipients of Open Access Publishing Fund Awards – opting to publish their research fully accessible to all rather than behind paywalls.

This year’s event will be on October 24 at 4:30pm in 245 Beacon, Room 215. Light refreshments and desserts will be served! RSVP is not required. Navigate to the Boston College Events page to find out more about the event.

Open Access Week is an internationally celebrated annual tradition where professionals in the publishing industry come together to reflect on the inequities of for-profit publishing models and explore models that fund scholarship more equitably – with a smaller burden on researchers coming from smaller institutions that cannot cover the costs of massive subscription fees.

Sign for Open Access Week Panel: Join us for a discussion about the future of equitable publishing and interdisciplinarity in academia. Pictures of panelists Raquel Muniz, Kelly Gray and Bryan Ranger. October 24th, 4:30 pm, 245 Beacon, Room 215.

Also for Open Access Week, look for a physical display on the third floor near the circulation desk, as well as a new digital display.

Open Journal Systems – Upgraded to Version 3.3; New Features Available for Faculty Journals

Early in the summer of 2023, Boston College Libraries upgraded our installation of Open Journal Systems from version 3.1.2 to version 3.3. Along with a much more intuitive dashboard and new design options, the upgrade gives faculty-run journals the ability to integrate with iThenticate, Turnitin’s automated plagiarism checker. Our portfolio includes e-journals from a wide breadth of disciplines at Boston College, and the list continues to grow.

In addition to providing an automated plagiarism checker, OJS:

  • provides automated indexing in places like the Directory of Open Access Journals (DOAJ) and Google Scholar (see the harvesting sketch after this list),
  • helps early-career researchers automatically connect their ORCID iD to their scholarly work,
  • automatically creates citations correctly formatted in a variety of styles depending on the discipline,
  • preserves your journal in a private LOCKSS archive network,
  • and creates unique identifiers for every article – making them much easier to find as scholars cite and visit your published issues.
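
That automated indexing works because OJS exposes journal metadata over OAI-PMH, the standard harvesting protocol that aggregators such as DOAJ use. As a minimal sketch, here is how one might pull a journal's records from that endpoint in Python – the journal path in the URL below is a hypothetical example, so substitute your journal's actual OAI endpoint:

    # A minimal sketch of harvesting journal metadata over OAI-PMH,
    # the protocol behind OJS's automated indexing. The journal path
    # in the endpoint URL is hypothetical -- use your journal's own.
    import requests
    import xml.etree.ElementTree as ET

    OAI_ENDPOINT = "https://ejournals.bc.edu/index.php/examplejournal/oai"

    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    response = requests.get(OAI_ENDPOINT, params=params, timeout=30)
    response.raise_for_status()

    ns = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    # Each <record> carries Dublin Core metadata that indexers harvest.
    root = ET.fromstring(response.content)
    for record in root.iterfind(".//oai:record", ns):
        title = record.findtext(".//dc:title", default="(no title)", namespaces=ns)
        identifier = record.findtext(".//dc:identifier", default="", namespaces=ns)
        print(title)
        print("  " + identifier)

These Dublin Core records are what services like DOAJ read, so checking this endpoint is a quick way to confirm that your articles are exposed for indexing.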

Our journal collections can be found at ejournals.bc.edu – and we have also put together a series of data visualizations that demonstrate the international audience many of our journals have reached through consistent outreach and accessibility best practices.

BC OJS Journals Performance

The snapshot above is one of our data visualizations, showing the readership of all of our OJS journals over the course of 2023. For the interactive graphic, be sure to visit our Tableau visualizations!

Making the Most of Open Data

As part of Open Access Week, we want to recognize the critical role of open data in research and innovation. While the importance of sharing data is well acknowledged, reusing open data and incorporating it into your own research can be equally valuable.

Why reuse data?

1. Efficiency and Time-Saving – Open data can accelerate the research process by sparing researchers the cost of collecting data that already exists.

2. Replicability and Validation – Reusing open data enables the replication of studies and the validation of research findings, which helps foster transparency and trust in scholarly work.

3. Ethics and Reducing Oversampling Burden – Certain populations, particularly marginalized and vulnerable ones, are sometimes oversampled in research; reusing open data helps reduce that burden.

4. Cross-Disciplinary Insights – Researchers can blend data from different disciplines, encouraging cross-disciplinary collaboration.

How to approach data reuse?

1. Assess Quality & Compatibility – Check for completeness, biases, and representation. Consider whether the scope, variables, and collection methods align with your research objectives (e.g., if you were going to collect your own data, would you do it the same way?). A short sketch of these first-pass checks follows this list.

2. Review Documentation – Part of quality assessment is reviewing metadata and understanding the collection methods and any cleaning processes the data underwent. You will also need to examine licensing and any ethical concerns around privacy and consent.

3. Data Integration – Open data might not always be the best fit, but it can be thought of as supplementary or contextual data that will make your primary research data more robust.
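
As a minimal sketch of the first-pass checks in step 1 – assuming the open dataset has been downloaded as a CSV; the file name and the "region" column are hypothetical placeholders:

    # First-pass quality checks on an open dataset, per step 1 above.
    # The file name and column names are hypothetical placeholders.
    import pandas as pd

    df = pd.read_csv("open_dataset.csv")

    # Completeness: share of missing values per variable.
    print(df.isna().mean().sort_values(ascending=False))

    # Duplicates can signal collection or export problems.
    print("duplicate rows:", df.duplicated().sum())

    # Representation: does a key grouping variable cover the
    # populations your study needs? ("region" is a placeholder.)
    if "region" in df.columns:
        print(df["region"].value_counts(normalize=True))

    # Compatibility: do the variables and types match what you
    # would have collected yourself?
    print(df.dtypes)

None of this replaces reading the dataset's documentation (step 2), but it quickly surfaces gaps, skew, and mismatches before you commit to reuse.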

Journal Policies and Artificial Intelligence

As developments continue in the world of generative artificial intelligence, more and more academic publishers are grappling with the reality of easily accessible AI tools such as ChatGPT. Rather than taking a strictly prohibitive approach to AI, a more realistic perspective may be one that recognizes the potential value of generative AI for students new to a subject, while being clear about its problems: not all of the information these tools produce is accurate, and many generative AI products have been known to generate fake or incorrect citations when prompted.

As a result of the popularity of these new AI tools – and with the understanding that access to them is hard to limit or manage – many publishers are creating explicit policies around the use of AI in scholarship. See below for some examples of the types of language that can be found across the publishing spectrum. If you are the editor of a journal here at BC and are curious about best practices for forming this type of policy, be sure to reach out to the Scholarly Communications team at the Boston College Libraries.

Nature

Nature is a well-known and renowned international science and technology journal – its policy clearly states the responsibility authors bear for content produced using large language models (LLMs), including generative artificial intelligence models such as ChatGPT. The following two paragraphs are taken from an editorial published in January 2023, when Nature added this language to its author guidelines.

“First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.”

JAMA Network

The Journal of the American Medical Association published an editorial in January of 2023 addressing the use of nonhuman authors:

Author Responsibilities

Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.

If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods.

This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)

Reproduced and Re-created Material

The submission and publication of content created by artificial intelligence, language models, machine learning, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

Image Integrity

The submission and publication of images created by artificial intelligence, machine learning tools, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

WAME

Later in 2023, the World Association of Medical Editors also established a set of recommendations around using generative AI, referring specifically to chatbots such as ChatGPT.

Chatbots are activated by a plain-language instruction, or “prompt,” provided by the user. They generate responses using statistical and probability-based language models. (5) This output has some characteristic properties. It is usually linguistically accurate and fluent but, to date, it is often compromised in various ways. For example, chatbot output currently carries the risk of including biases, distortions, irrelevancies, misrepresentations, and plagiarism – many of which are caused by the algorithms governing its generation and heavily dependent on the contents of the materials used in its training. Consequently, there are concerns about the effects of chatbots on knowledge creation and dissemination – including their potential to spread and amplify mis- and disinformation (6) – and their broader impact on jobs and the economy, as well as the health of individuals and populations. New legal issues have also arisen in connection with chatbots and generative AI. (7)

Published in May of 2023, this statement seeks to clarify and extend a similar January announcement. While publishers and editors are still figuring out how AI fits into the larger picture of scholarly publishing, remaining vigilant about changes in the technology will be a key task for journal managers and editors.

What does the future hold?

Ultimately, with any improved technology – whether the ability to copy printed pages or to access content digitally – there is always a period of fear and excitement as existing models of publication and accessibility are challenged by a new one. Whatever the pitfalls or successes of generative AI, it will be important for journal editors and for scholars submitting their work to understand a publication's policies around generative AI and whether consulting it is accepted.