AI-powered literature search: some observations and concerns


Farooq Azam Rathore
Fareeha Farooq

Abstract

The recent easy availability and rapid rise of Artificial Intelligence (AI) tools in medical research, particularly in literature search, have revolutionized the way researchers and physicians access and analyse scientific information.1 These AI-powered tools offer numerous advantages when conducting a literature search, including rapid identification of relevant articles, literature mapping, generation of concise summaries, and the ability to process vast amounts of data in a short time.


A literature search primarily consists of three important components: searching for the most recent and relevant articles, critically appraising them, and synthesizing the information.2,3 Owing to the current boom in AI tools, there are now tools available for each of these steps. A brief overview of some tools that perform these functions is presented in Table I.


However, in our role as physicians, faculty and researchers who have been using these tools for more than a year for various tasks, including AI-powered literature search, we have some concerns that need to be highlighted.


Recommending articles from predatory journals


We have noticed that some of the articles these tools recommend are from predatory journals. Studies published in predatory journals, which lack a rigorous peer review process, are more prone to exhibit suboptimal standards of reporting methodologies and outcomes. Their research ethics committee approvals are also flawed compared to those of studies published in reputable journals.4 This means that a researcher relying solely on an AI-powered literature search may base their work on flawed, unethical, or even fabricated findings from these predatory journals, which may then be cited in their own publications. This issue of infiltration of citation databases has been highlighted earlier by experts.5,6 We are concerned that, if this goes unchecked during AI-powered literature searches, it will pollute academic journals with low-quality citations from predatory journals and thus undermine research integrity.


Inaccessible citations and sources


Another issue that we have commonly faced is that some recommended sources and articles are inaccessible, either because of broken links or because the articles do not exist. This raises questions about the data sources utilized by these AI tools and the overall reliability of their outputs. Due to the black-box nature of many AI systems, especially deep learning models, it is unclear how these systems extract information from sources that are not readily available, which casts doubt on both the quality and the authenticity of the generated summaries and recommendations.


Lack of Transparency in article selection


Many AI tools offer summaries of articles or recommend certain articles, and these recommendations may differ across tools. There is a lack of transparency and reproducibility in the article selection and summarization processes employed by these AI tools. For some search engines, it remains unclear whether articles are chosen and recommended based on citation counts, impact factors, or other criteria. The algorithms behind AI tools may inherently favour certain types of content or styles of writing, which can affect the summaries produced.7 This data and algorithmic bias may be problematic for seasoned researchers who need to understand the basis of the information generated by these AI tools. A lack of understanding of the underlying mechanisms and algorithms can negatively influence their ability to critically evaluate the relevance and reliability of the AI-generated outputs.


AI Hallucinations


AI-generated summaries can contain "hallucinations", where the output includes information that is factually incorrect or not supported by the original text.8 This phenomenon poses a significant risk, as it can mislead readers and distort the understanding of the original research findings. Although a disclaimer is clearly displayed on most AI tools and websites, researchers may still be tempted to simply copy and paste the output without any critical evaluation.


Blind reliance on the output and the risk of misattribution


Since AI tools make the process of literature review and data extraction simple and easy, there is a growing tendency, especially among young researchers, to blindly copy and paste the output and rely on it without critically examining it. Reading is one of the defining features of humankind, and critical appraisal of the literature and understanding the actual meaning of the words and the science by reading the articles is an essential part of a physician's training.9 Those who fall into the trap of generating and reading only summaries of articles will likely develop a superficial understanding, which can be detrimental in the long term.


It is important to note that these concerns are not meant to discourage the use of AI in research. When used judiciously and transparently as a research assistant, these tools can significantly enhance research productivity and the quality of work. However, over-reliance on or blind acceptance of AI outputs could lead to embarrassing situations or even jeopardize careers if incorrect information is propagated.


RECOMMENDATIONS
The use of AI tools for various research purposes, particularly literature search, will likely increase in the future. Therefore, educational institutes, universities, faculty members and senior researchers should advocate the ethical integration of AI tools into the research workflow.


We propose the following recommendations:



  1. Developers of AI-based literature search tools should prioritize transparency. They should clearly outline their article selection criteria and summarization algorithms, and this information should be displayed in a prominent place on their websites.

  2. Institutions should provide comprehensive training to researchers and faculty members, as part of faculty development programs, emphasizing the importance of critical evaluation and ethical uses of AI-generated outputs.

  3. Journal editors and reviewers should be vigilant about the potential misuse of AI tools in literature reviews and demand clear documentation of search methodologies, whenever there is a suspicion or lack of clarity.

  4. Researchers should adopt a balanced approach while using AI tools. They should use them to enhance their efficiency while combining this with their own critical evaluation of the output and checking the primary source of information. They should appropriately acknowledge the use of AI tools as per the journal’s policy.10

  5. Institutes and regulatory bodies in the country should formulate clear guidelines for the integration and ethical use of AI tools at different steps of research, writing and medical education. Some institutes, such as the Aga Khan University, Karachi, have created basic AI guidelines that can serve as a good starting point.11 Further research should be conducted to develop best practices for integrating AI tools into the research process, ensuring that they complement rather than replace human expertise.


We conclude by acknowledging the role AI tools can play in enhancing research productivity. However, it is important that we address the challenges and concerns outlined above to ensure the integrity and quality of scientific inquiry. By promoting a culture of critical thinking and responsible AI use, we can truly harness the power of these tools while safeguarding the foundations of evidence-based medicine.




How to Cite
Rathore, Farooq Azam, and Fareeha Farooq. “AI-Powered Literature Search: Some Observations and Concerns”. KHYBER MEDICAL UNIVERSITY JOURNAL, vol. 16, no. 4, Dec. 2024, pp. 354-6, doi:10.35845/kmuj.2024.23740.
Section
Viewpoint
Author Biography

Farooq Azam Rathore, Armed Forces Institute of Rehabilitation Medicine (AFIRM), Rawalpindi, Pakistan

Graded Specialist in Physical Medicine and Rehabilitation

References

1. Matthews D. Drowning in the literature? These smart software tools can help. Nature 2021;597(7874):141-2. https://doi.org/10.1038/d41586-021-02346-4

2. Fink A. Conducting research literature reviews: from the Internet to paper. 5th ed. Thousand Oaks (CA): SAGE Publications; 2020.

3. Webster J, Watson RT. Analyzing the past to prepare for the future: writing a literature review. MIS Q. 2002;26(2):xiii-xxiii. [Accessed on: July 29, 2024]. Available from URL: https://www.jstor.org/stable/4132319

4. Moher D, Shamseer L, Cobey KD, Lalu MM, Galipeau J, Avey MT, et al. Stop this waste of people, animals and money. Nature 2017;549(7670):23-5. https://doi.org/10.1038/549023a

5. Severin A, Low N. Readers beware! Predatory journals are infiltrating citation databases. Int J Public Health 2019;64(8):1123-4. https://doi.org/10.1007/s00038-019-01284-3

6. Cortegiani A, Manca A, Lalu M, Moher D. Inclusion of predatory journals in Scopus is inflating scholars' metrics and advancing careers. Int J Public Health 2020;65(1):3-4. https://doi.org/10.1007/s00038-019-01318-w

7. Jain R, Jain A. Generative AI in writing research papers: a new type of algorithmic bias and uncertainty in scholarly work. ArXiv [Internet]. 2023 [Accessed on: July 29, 2024]. Available from URL: https://arxiv.org/abs/2312.10057

8. Hatem R, Simmons B, Thornton JE. A call to address AI "hallucinations" and how healthcare professionals can mitigate their risks. Cureus 2023;15(9):e44720. https://doi.org/10.7759/cureus.44720

9. Charon R, Hermann N, Devlin MJ. Close reading and creative writing in clinical education: teaching attention, representation, and affiliation. Acad Med 2016;91(3):345-50.

10. Inam M, Sheikh S, Minhas AMK, Vaughan EM, Krittanawong C, Samad Z, et al. A review of top cardiology and cardiovascular medicine journal guidelines regarding the use of generative artificial intelligence tools in scientific writing. Curr Probl Cardiol 2024;49(3):102387. https://doi.org/10.1016/j.cpcardiol.2024.102387

11. The Aga Khan University. The use of generative AI in higher education at AKU (Draft Guidelines). [Accessed on: July 29, 2024]. Available from URL: https://www.aku.edu/admissions/Documents/policy-use-of-generative-ai.pdf
