Springer Nature, the world’s largest academic publisher, has announced that AI writing tools like ChatGPT cannot be credited as authors in papers published in its journals. However, the publisher says it has no objection to scientists using such tools to help write or generate ideas for research, as long as their use is properly disclosed.
Magdalena Skipper, editor-in-chief of Springer Nature’s flagship publication, Nature, explains that this clarification was necessary due to the recent explosion of LLM tools, including ChatGPT, in the scientific community.
“This new generation of LLM tools has really exploded into the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present,” says Skipper.
While ChatGPT and earlier large language models have been named as authors in a small number of published papers and preprints, the nature of their contribution varies from case to case.
In one case, when ChatGPT was used to argue for a certain drug in the context of Pascal’s wager, the AI-generated text was clearly labelled. In another preprint, however, the author only acknowledged the bot’s contribution with a note saying it ‘contributed to the writing of several sections of this manuscript.’
Reaction in the scientific community to papers crediting ChatGPT as an author has been mostly negative. The main arguments against granting AI authorship are that it cannot fulfil the duties required of an author: it cannot be meaningfully accountable for a publication, cannot claim intellectual property rights over its work, and cannot correspond with other scientists and the press to explain and answer questions about the work.
While there seems to be agreement on not crediting AI as an author, there is less clarity on the use of AI tools to write a paper, even with proper acknowledgement. This is partly because of known problems with these tools’ output, such as amplifying social biases and generating inaccurate or fabricated details.
Organisations like Springer Nature believe that an outright ban on AI tools in scientific work would be unenforceable. Instead, they argue, the scientific community needs to come together to work out new norms for disclosure and guardrails for safety. Skipper notes that AI tools can have positive uses, such as helping researchers for whom English is not their first language, and that it is important to focus on this potential while keeping possible misuses in check.