ORCID Profile
0000-0003-0623-0934
Current Organisations
University of Alberta, Sunnybrook Health Sciences Centre, University of Toronto Faculty of Medicine
Publisher: Scientific Foundation SPIROSKI
Date: 30-01-2023
DOI: 10.3889/OAMJMS.2023.11502
Abstract: Journals have begun to publish papers in which chatbots such as ChatGPT are shown as co-authors. The following WAME recommendations are intended to inform editors and help them develop policies regarding chatbots for their journals, to help authors understand how use of chatbots might be attributed in their work, and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we expect these recommendations to evolve as well.
Publisher: Philippine Society of Otolaryngology-Head and Neck Surgery, Inc. (PSO-HNS)
Date: 04-06-2023
DOI: 10.32412/PJOHNS.V38I1.2135
Abstract: Introduction: This statement revises our earlier “WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications” (January 20, 2023). The revision reflects the proliferation of chatbots and their expanding use in scholarly publishing over the last few months, as well as emerging concerns regarding lack of authenticity of content when using chatbots. These Recommendations are intended to inform editors and help them develop policies for the use of chatbots in papers published in their journals. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work, and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we will continue to modify these recommendations as the software and its applications develop. A chatbot is a tool “[d]riven by [artificial intelligence], automated rules, natural-language processing (NLP), and machine learning (ML)…[to] process data to deliver responses to requests of all kinds.”1 Artificial intelligence (AI) is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”2 “Generative modeling is an artificial intelligence technique that generates synthetic artifacts by analyzing training examples, learning their patterns and distribution, and then creating realistic facsimiles. Generative AI (GAI) uses generative modeling and advances in deep learning (DL) to produce diverse content at scale by utilizing existing media such as text, graphics, audio, and video.”3, 4 Chatbots are activated by a plain-language instruction, or “prompt,” provided by the user. They generate responses using statistical and probability-based language models.5 This output has some characteristic properties. It is usually linguistically accurate and fluent but, to date, it is often compromised in various ways.
For example, chatbot output currently carries the risk of including biases, distortions, irrelevancies, misrepresentations, and plagiarism, many of which are caused by the algorithms governing its generation and are heavily dependent on the contents of the materials used in its training. Consequently, there are concerns about the effects of chatbots on knowledge creation and dissemination – including their potential to spread and amplify mis- and disinformation6 – and their broader impact on jobs and the economy, as well as the health of individuals and populations. New legal issues have also arisen in connection with chatbots and generative AI.7 Chatbots retain the information supplied to them, including content and prompts, and may use this information in future responses. Therefore, scholarly content that is generated or edited using AI would be retained and, as a result, could potentially appear in future responses, further increasing the risk of inadvertent plagiarism on the part of the user and any future users of the technology. Anyone who needs to maintain confidentiality of a document, including authors, editors, and reviewers, should be aware of this issue before considering using chatbots to edit or generate work.9 Chatbots and their applications illustrate the powerful possibilities of generative AI, as well as the risks. These Recommendations seek to suggest a workable approach to valid concerns about the use of chatbots in scholarly publishing.
Location: United States of America
Location: United Kingdom of Great Britain and Northern Ireland
Location: Spain
No related grants have been discovered for Edsel Ing.