
ALPSP conference report: Open for AI and RI



 

The Association of Learned and Professional Society Publishers (ALPSP) Annual Conference was held in Manchester from 11 to 13 September and brought together just under 300 delegates from across the academic publishing industry. As always, it was great to chat with old and new connections and to listen to the sessions. The main themes of the conference were, unsurprisingly, open access, artificial intelligence and research integrity. What felt different to me at this conference was that it made me think about how those three topics are connected.

 

Open Access and Generative AI 

In the opening keynote, Jake Okechukwu Effoduh, Assistant Professor of Law at Toronto Metropolitan University, spoke about Scholarly Publishing in the Era of Open Access and Generative AI. On the one hand, GenAI tools like ChatGPT are accessible to the wider public, not just a select few. On the other hand, content published OA is openly available to GenAI tools as training material, making it prone to misuse. In session 9 on Licensing Developments, Chris Bennett from Cambridge University Press went into more detail: content published under a CC-BY license can be reused, provided the original is attributed. However, current GenAI models like ChatGPT do not cite their sources in the results they give, though Chris shared that CUP had been successful in negotiating two agreements where they insisted on attribution. Publishing open access under a CC-BY-ND license would restrict reuse of the content, potentially keeping it out of GenAI training data while still meeting the main goal of opening up reading access. However, this would limit GenAI models’ training material to sources of lower quality, which in turn would make the results less accurate and even more biased. If anything, Jake urged us as an industry to work with AI companies – for both open access and paywalled content – and to consider closed-content tools limited to academic, credible sources, for example, excluding untrustworthy ones. Holding content back would be detrimental; instead, publishers’ content should be leveraged in a safe and controlled environment rather than kept isolated.

 

Licensing content – both open access and paywalled – to AI companies can also be a new revenue stream for publishers, and could help offset revenue lost in the transition to open access from subscriptions, licensing, journal archives and other sources, which was discussed in session 4: Three Months to Go: Are we ready for OA in 2025? Ben Ashcroft from De Gruyter Brill was clear that “the transition to OA is not effort or cost neutral – it is an investment in the future”. Ben broke down the effort that went into De Gruyter’s preparations for the move to Subscribe to Open (S2O), both in terms of infrastructure and internal communications: to make the change successful, conversations had to happen across the board with everyone involved, including journal managers and Editors-in-Chief. Participants also learned that De Gruyter and Cambridge University Press are now mixing different open access business models within their portfolios to serve different stakeholders. While the idea that “no one model fits all” in the transition to open access is not new, hearing about single publishers mixing different models seems to be a new development, though perhaps one limited to publishers of a certain size and less applicable to smaller organisations, as also discussed in Session 2 about smaller publishers.

 

Open access and research integrity 

On the surface there seems to be a strong link between open access and research integrity issues, given that fully open access publishers like Hindawi and Frontiers have been a target for papermills and have had to retract thousands of papers. Are open access publishers more of a target for papermills? Perhaps so. But they should not be. Just because an article is published open access should not make it more likely to be fake or produced by a papermill. The business model under which a paper is published – paywalled or open access – should be completely separate from the editorial decision about acceptance. However, this does not seem to be the case, at least for some publishers: under an article processing charge (APC) economy, publishing more articles means making more money, and rushing through the process for economic reasons at the expense of research integrity has clearly been too tempting. We have seen news about editors leaving journals operating an open access model because they had been pushed to increase article output and accept more articles. Similarly, a look at the ESAC Publishers’ journal portfolio overview shows that for many publishers Gold OA growth sits neatly on top of other models, not in place of them. By contrast, in Session 8a PLOS’s Emily Chenette shared that PLOS One had seen an increase in low-quality submissions that do not even meet the bar of sound science for PLOS, and that PLOS’s rigorous processes – and the fact that every paper published has been read by someone – have helped address this. As she put it, “the role of quality should remain the same, and as an industry we need to put quality over profit.”

 

Artificial intelligence and research integrity 

AI has amplified the research integrity issue, since GenAI models like ChatGPT have made it easier for anyone to have a machine write a fake paper. But we cannot blame AI for the publication of fake or low-quality papers. Research integrity issues need to be solved by making sure that fake papers are not published. As Ian Mulvany, CTO at BMJ, put it during Session 8a: The Role of Human Editors in an AI World: “we need to stand behind what we publish – we need to set up the right kinds of editorial processes to do even more rigorous checks to make sure we keep the same high quality”. AI-powered tools may well help with some of those checks, and during the conference participants heard about various such initiatives: the ALPSP innovation award nominations included Morressier’s Integrity Manager and Signals, as well as the winner, the Papermill Alarm by Clear Skies. Session 7a: Industry Updates included an update on Wiley’s Research Exchange Screening, F1000’s research integrity checks on VeriXiv, their new verified preprint server for publishing Gates-funded research, and NISO’s recommendations for retractions.

 

While papermills relying on GenAI produce fake and incorrect content, not all AI use is negative, though there are limitations. In the opening keynote, Jake stressed that any AI tool, whatever the context, should be considered an “assistant”: the person submitting the paper is ultimately responsible for the content and should never rely solely on the tool, but use it to make better decisions. The same applies to AI tools used by publishing professionals. In a marketing context, AI can be a great asset for pulling together data and delivering personalized messages to authors, as discussed during Session 6b: Make Better Decisions: Uncover Insights from your Marketing Data, but coming up with a creative marketing initiative to engage authors still requires human activity and thought. Speakers in Session 8a: The Role of Human Editors in an AI World acknowledged that AI-powered tools can help with various tasks, such as matching reviewers to papers and checking references, but that these tools should not be used on their own to make decisions or take over jobs – instead they empower humans to do their actual jobs and make better decisions.

 

Ian Mulvany raised an interesting question by asking whether it matters if an article has been produced with the help of AI. If the content passes the relevant checks, including peer review, and is indeed correct, then who cares? On paper, perhaps it does not matter – but as James Butcher shared during our final discussion on this: what will happen to the process of learning while writing? The process itself helps you organize and structure your thoughts. Personally, I do agree with this, as it strongly aligns with how I work myself; however, different people have different learning styles and writing processes, and younger generations who grow up consuming digitally processed information will continue to learn in different ways, so this might well become the new normal. In the closing Session 11: What do Industry Leaders really think?, Jake urged publishers to ask authors to disclose whether AI tools had been used or not: “it will help us to understand the black box that is AI. Its use will become normal but for now we’re still in a nascent stage.” He likened this era to the 1960s, when students had to disclose if they had used a calculator in a maths test, and to his own student days around 2000, when students had to state whether they had used the internet. Now both are second nature, and using AI tools will become the same for researchers and authors in future – it will become normal for publishers not to question this.

 

If your learning style is musical, and if you are concerned about how the challenges of open access, artificial intelligence and research integrity might be solved in the future, then you might want to take a peek at the recording of Heather Staines’ time-travelling musical. Charlie Rapple has also written a great write-up in the Scholarly Kitchen. Taking part in this has certainly been the highlight of the conference for me!
