
The Future Is Now

The rapid growth of generative AI such as ChatGPT could have profound effects on research and publishing.
By Lauren Arcuri

Earlier this year, ChatGPT burst onto the scene, quickly becoming one of the hottest topics in science. As a form of generative artificial intelligence (AI), users input prompts and questions, and ChatGPT generates content such as emails, essays, blog posts and resumes.

Although AI has been around for decades, the rapid advancement, easy accessibility and fast adoption of ChatGPT created a lot of discussion and debate. This “disruptor” has the potential to change every type of industry in our society. In the scientific realm, it could revolutionize the way research is conducted and communicated—but it comes with significant risks and ethical concerns.

ChatGPT is a type of large language model (LLM) that uses a combination of machine learning tools and natural language processing. While users may feel like ChatGPT is “understanding” them, in reality it is simply “predicting” what comes next, says Phill Jones, PhD, co-founder of digital and technology at MoreBrains Cooperative. LLMs take a series of words and predict the word most likely to come next, based on the language data sets the model has been trained on.
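To make that prediction idea concrete, the minimal sketch below implements next-word prediction with a toy bigram model: it simply counts which word follows which in a tiny training text. The corpus and function names here are invented for illustration; ChatGPT uses large neural networks over subword tokens rather than word counts, but the underlying “predict the next token” task is the same.

    from collections import Counter, defaultdict

    # Toy training text; real LLMs are trained on vast corpora and
    # predict subword tokens with neural networks, not word counts.
    corpus = ("the model predicts the next word "
              "the model learns from data "
              "the data trains the model").split()

    # Count how often each word follows each preceding word (a bigram model).
    followers = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        followers[prev_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the training text."""
        candidates = followers.get(word)
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(predict_next("the"))  # -> "model", its most frequent follower

A real LLM does the same thing at vastly greater scale, assigning a probability to every token in its vocabulary at each step and sampling from that distribution.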

The Effect on Scholarly Publishing

Some scholars have compared this phase of generative AI to the birth of email. When email first became available, people thought it would replace the postal service. But the postal service survived, and email’s effects have stretched far beyond a replacement for paper mail, influencing everything about how we communicate. The implications of generative AI are likely to be similar as it transforms how we work, communicate and produce information. But experts say the specifics of those transformations are not fully crystallized yet.

“AI will not replace people. People will continue to be a necessary part of the publishing ecosystem. We will always need to be mindful of bad data and research being spread,” says Damita Snow, CAE, senior manager of publishing technologies and a publishing diversity, equity and inclusion specialist at the American Society of Civil Engineers. Snow has spent the past four years studying the ethics of AI usage. “It’s not that we can keep bad research from happening. It’s happening already. But we can surely try our best to mitigate harm.”

To help ensure the ethical use of AI in APS publications, APS has developed policies to mitigate the improper use of AI and AI-assisted tools. “These tools do not qualify for authorship and cannot be considered an author of any article published in APS journals,” says Christopher England, PhD, APS associate publications director, program development, policy and ethics. “Our policy at APS is also supported by the Committee on Publication Ethics.”

Any AI-assisted tool must be properly referenced in the “materials and methods” section of an article if the tool was used in the design or performance of the experiments or in generating the conclusions. Tools may also be used to help authors edit or write the manuscript. “In these situations, the author should mention it in the acknowledgements section,” England says.

Authors for APS journals must list the specific AI tool used, the version, the process it was used for and the reason. They must also certify that it is used in a manner that does not conflict with APS ethical policies and take full responsibility for the content. “At any point in time, authors may be asked to supply the methods of the application, if not already specified,” England says. That includes syntax and query structure, such as the prompt used to generate the response.

Using AI tools to generate figures is an emerging and growing practice in the field. As always, APS policy is that “it’s not acceptable to fabricate, alter or delete specific features within an image used in a manuscript,” England says.

How Can AI Be Used?

There are many potential applications of LLMs such as ChatGPT in research that can streamline the publishing process. “Publishers are interested in using AI as a tool to enhance what they can offer and possibly [create] new products for new types of markets,” Jones says. “That’s something a lot of people are interested in due to the economic shifts in the publishing market.” 

For example, an LLM can take a research paper and create a high-quality abstract based on its key concepts and findings. It can also enhance the structure and coherence of a manuscript, suggesting more effective organization of content and improving readability. And it can polish language and style, much as Grammarly, an AI-based tool in use for several years, extends word-processor grammar checking. Because of the speed with which AI can generate content, it can help disseminate research more quickly.

“The obvious first place to look is processing the information that people have in their databases and the articles that they’ve published over the years and regenerating that information in different formats to make it accessible to different audiences,” Jones says. For example, ChatGPT could create a summary of a scientific research paper that is easier for a lay audience to understand. It could also help outline and edit articles in ways that are at once more powerful and simpler.
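As a hedged illustration of that kind of repurposing, the sketch below asks an LLM to rewrite an abstract in plain language, using the OpenAI Python SDK’s chat-completions call. The model name, system prompt and placeholder abstract are assumptions for this example, not part of any APS or publisher workflow.

    # pip install openai; requires an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    abstract = ("Placeholder abstract: in practice, this would be the "
                "published article's abstract or full text.")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Rewrite scientific abstracts in plain language "
                        "for a general audience."},
            {"role": "user", "content": abstract},
        ],
    )

    print(response.choices[0].message.content)  # the lay summary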

“AI will allow publishers to gather even more data on their members and customers and to personalize their needs based on behavior,” Snow says. AI can also be used to collect and analyze search engine optimization and how content is being received by audiences. “It can help assess what subject areas they should focus on and what they may need to reevaluate,” she says.

AI tools “could be a huge resource to improve processes and efficiencies and make the submission process simpler for authors,” says AI ethicist Chhavi Chauhan, PhD, a molecular biologist and board member of the Digital Pathology Association. Similar to the autocomplete for online forms that is currently available, an AI-aided submission process could capture information critical for the publication in a way that is less taxing for the author. “It’s faster because it’s done in a more automated way, and it saves the editorial staff’s time, too,” she says.

Another way that generative AI may be helpful for publishing is to help generate the article text itself. This is where the ethical issues can get a bit sticky. “Maybe there’s a line to be drawn, but people have been using grammar and spelling correction in word processors for a long time and no one’s concerned about that,” Jones says. “The red flags have come up when we’re talking about generating large chunks of text that the authors may or may not review before submission. That’s what makes a lot of publishers nervous.”

Those concerns make many AI ethicists nervous, too. “Content synthesized by LLMs lacks scientific accuracy and logical flow, is not critical in terms of elaboration of data and various contexts, and is never, ever novel,” Chauhan says. “Generative AI cannot create content with deep, novel insights that a human author may offer based on their experience, failed experiments, conferences, talks, poster presentations, elevator conversations and the like. These experiences and insights can be deeply meaningful and instrumental in elevating the impact of any research.”

Also, once someone pastes content into an LLM, that content can become part of the model’s data set and may be used to synthesize responses to other users’ prompts, compromising both the confidentiality and the novelty of the idea.

Ethical Risks Examined

The societal implications of using AI to generate content deserve serious discussion. Ethical issues span several areas, including rigor, reproducibility, bias, intellectual property, equity and diversity.

One of the challenges with assessing the ethical application of AI is that most of the time, “AI is a black box,” says Georgios Kararigas, PhD, an APS member and professor of physiology at the University of Iceland. “We have an input, and we get an output. But how is this output generated? In many cases, this isn’t clear.”

Tools such as the Z-Inspection process that Kararigas co-developed, which evaluates AI systems to determine whether they are trustworthy, can help make sense of a complicated set of questions around the ethics of AI.

The prevalence of paper mill papers—research that has been falsified to generate citations—is not new, but there is concern that AI will make such papers easier and cheaper to produce. “And it could be harder to detect because the text may be more plausible than the old-fashioned method of generating a fake paper,” Kararigas says. “There’s also a concern about the level of sophistication” of the fake papers.

Another concern is research integrity. Chauhan says that in the current “publish or perish” scholarly publishing world, quantity has been rewarded over quality, and that the propagation of false information, disinformation and misinformation is commonplace. A real concern is that generative AI may help spread this false information, as LLMs are only as good as the data they’re trained on—and it’s not transparent what those data are. “We need to sharpen our skills further and equip ourselves with more effective tools to detect and mitigate scientific misconduct and maintain high-quality scientific rigor in all published content,” she says.

Inequity is yet another serious issue that may become more prominent in the age of AI. “I caution about the use of generative AI exacerbating the existing inequities while creating new ones that may arise from the digital divide,” Chauhan says. Countries’ AI regulations vary, and in some countries, including the U.S., there are no regulations at all.

“This is going to lead to a lack of diversity in the training data sets for some models. So, the outputs they will generate are not going to be the same. Some LLMs may be trained on more credible data sets than others,” Chauhan says. In addition, she warns that in the future, AI creators may decide to monetize the models, leading to inequity in access. “I would like to appeal and warn against dissecting scholarly publishing into the world of the haves and the have-nots.”

Looking to the Future

As generative AI and other AI-based tools are developed and existing ones evolve, there are many possibilities for how they will be applied, and researchers say the future may take us in any number of directions. Generative AI has the power to streamline how text and information are produced. Used responsibly and ethically, it holds massive transformative potential for the way we do research, work and communicate.

Ethicists are calling for the creation of AI ethics boards and offices of research integrity at the institutional and national levels. They also stress the need to emphasize critical thinking skills for students, educators, researchers and reviewers.

At APS, England says, “We will continue to monitor ethical issues related to AI and update our policies as needed. In addition, we will continue to explore ways to implement AI tools that are beneficial to our community, such as the recent integration of SciScore, a tool used to help our authors identify ways to improve the rigor and reproducibility of their work.” 


This article was originally published in the September 2023 issue of The Physiologist Magazine.

