Abstract
Purpose: This study conducts an integrative literature review on the role of artificial intelligence (AI) in scientific research, examining both its transformative potential and associated risks. The aim is to identify strategic priorities and governance mechanisms necessary to ensure ethically and epistemologically sound use of AI in science.
Design/Methodology/Approach: The review synthesizes findings from thirteen high-impact academic and institutional sources published between 2022 and 2024, including empirical studies, policy briefs, and conceptual analyses. Thematic content analysis was employed to extract core issues related to the epistemological, ethical, and operational dimensions of AI integration in research.
Findings: The results reveal three major areas of concern: (i) the illusion of understanding generated by AI tools; (ii) ethical risks related to bias, fraud, and over-automation; and (iii) governance gaps in publishing and scientific evaluation. Conversely, opportunities include increased efficiency, hypothesis generation, and broader access to knowledge. Five strategic priorities were identified to guide responsible AI integration in science.
Practical Implications: This study provides actionable insights for journal editors, policymakers, and researchers seeking to establish living guidelines, strengthen human-AI collaboration, and prevent epistemic monocultures. Training and governance frameworks are critical to mitigating misuse while fostering innovation.
Originality/Value: By integrating diverse sources, this review contributes to the debate on how to harness AI's potential in science without undermining critical thinking, scientific integrity, or academic diversity.
References
Chubb, J., & Cowling, P. I. (2023). What ChatGPT and generative AI mean for science. Nature Human Behaviour, 7(1), 1–2. https://doi.org/10.1038/s41562-023-01633-0
Else, H. (2023a). Abstracts written by ChatGPT fool scientists. Nature. https://www.nature.com/articles/d41586-023-00056-7
Else, H. (2023b). How ChatGPT and other AI tools could disrupt scientific publishing. Nature, 613(7945), 620–621. https://doi.org/10.1038/d41586-023-00191-1
Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2023). Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. NPJ Digital Medicine, 6, 75. https://doi.org/10.1038/s41746-023-00797-7
Heidt, A. (2023). Artificial-intelligence search engines wrangle academic literature. The Scientist. https://www.the-scientist.com/news-opinion/artificial-intelligence-search-engines-wrangle-academic-literature-71094
Himmelstein, D. S., Llorens, A., Jensen, L. J., & Sharan, R. (2024). Living guidelines for generative AI—why scientists must oversee its use. Nature, 625(7990), 693–696. https://doi.org/10.1038/d41586-024-00196-4
Messeri, L., & Crockett, C. (2024). Doing more, but learning less: The risks of AI in research. Yale News. https://news.yale.edu/2024/03/06/doing-more-learning-less-risks-ai-research
National Academies of Sciences, Engineering, and Medicine. (2023). Hurdles for AI for scientific discovery. The National Academies Press. https://doi.org/10.17226/27040
National Academies of Sciences, Engineering, and Medicine. (2023). Next steps for AI for scientific discovery. The National Academies Press. https://doi.org/10.17226/27041
OECD Global Science Forum. (2024). Fundamentals of AI in scientific research. Organisation for Economic Co-operation and Development. https://www.oecd.org
Park, Y., Bender, E. M., & Kohane, I. S. (2023). The Nobel Turing Challenge: Creating the engine for scientific discovery. Nature Machine Intelligence, 5(2), 83–88. https://doi.org/10.1038/s42256-023-00640-6
Stamos, D. N., & Weatherall, J. O. (2023). Artificial intelligence and illusions of understanding in scientific research. Philosophy of Science. https://doi.org/10.1017/psa.2023.77
Stilgoe, J. (2023). Five priorities for research on the risks of AI to science. Nature, 619(7970), 25–27. https://doi.org/10.1038/d41586-023-02207-1