The Ethics of Disclosing the Use of Artificial Intelligence Tools in Writing Scholarly Manuscripts

– Banning large language models (LLMs) would be a mistake; a ban would be difficult to enforce and may encourage undisclosed use.
– LLMs have useful applications in writing and editing, promoting diversity and inclusion in scholarship.
– Policies should prioritize transparency, accountability, fair credit allocation, and integrity regarding LLM use.

The use of LLMs, such as ChatGPT, to write, review and edit scholarly manuscripts presents challenging ethical issues for researchers and journals. We argue that banning the use of LLMs would be a mistake because a ban would not be enforceable and would encourage undisclosed use of LLMs. Also, since LLMs can have some useful applications in writing and editing text (especially for those conducting research in a language other than their first language), banning them would not support diversity and inclusion in scholarship.

The most reasonable response to the dilemmas posed by LLMs is to develop policies that promote transparency, accountability, fair allocation of credit, and integrity. The use of LLMs should be disclosed through (1) a free-text statement in the introduction or methods section, (2) in-text citations and references, or (3) supplementary materials or appendices. LLMs should not be named as authors or credited in the acknowledgments section because they lack free will and cannot be held morally or legally responsible.

Article authors: Mohammad Hosseini, David B. Resnik, and Kristi Holmes

Source: Research Ethics

Type: News item

Publication date: 15/06/2023

Selected by:
