Ethical concerns for using artificial intelligence chatbots in research and publication: Evidence from Saudi Arabia

Abstract

Artificial intelligence (AI) conversational generative chatbots have drawn the attention of academics and have been increasingly used in the scientific research process since the launch of ChatGPT in November 2022. Despite growing research on the use of AI chatbots in research and publication, few studies have addressed in depth the ethical concerns that arise from their use. This research explores the perceptions of academics and their leaders regarding the use of AI chatbots in research and publication. It examines the ethical dilemmas and the ethical approaches that academics and their leaders draw on when deciding whether to use chatbots in scientific research. For these purposes, in-depth interviews were conducted with 21 academics/researchers and 11 leaders of scientific research in public universities in Saudi Arabia. The results of the thematic analysis confirmed that AI chatbots are extensively used in scientific research, although many researchers present their publications as their own work without acknowledging the support of chatbots. The results revealed ten interrelated ethical concerns that, if left unaddressed, could contribute to the growth of pseudoscience in developing countries. Hence, strategies for mitigating these ethical concerns are suggested. The research showed that academics often justify their use of chatbots through a "utilitarian" approach, whereas most leaders adopt a "virtue" or "common good" approach when expressing concerns about chatbot adoption in scientific research. This research calls for policies and interventions from policymakers and other stakeholders to promote the responsible and ethical use of chatbots in research and publication.

https://doi.org/10.37074/jalt.2024.7.1.21