LARGE LANGUAGE MODEL AND HALLUCINATIONS: A BIBLIOMETRIC REVIEW

Authors

  • Nur Emma Mustaffa Quantity Surveying Department, Tunku Abdul Rahman University of Management and Technology, Malaysia
  • Ke En Lai Department of Quantity Surveying, Tunku Abdul Rahman University of Management and Technology, Malaysia
  • Norhazren Izatie Mohd Department of Quantity Surveying, Universiti Teknologi Malaysia, Malaysia
  • Fuziah Ismail Department of Quantity Surveying, Universiti Teknologi Malaysia, Malaysia
  • Nurshikin Mohamad Shukery Department of Quantity Surveying, Universiti Teknologi Malaysia, Malaysia
  • Siti Rahmah Omar Department of Landscape Architecture, Universiti Teknologi Malaysia, Malaysia
  • Hamidah Kamarden Department of Chemical Engineering, Universiti Teknologi Malaysia, Malaysia
  • Nur Hidayah Abd Rahman Management with Tourism Programme, Universiti Sains Islam Malaysia, Malaysia

DOI:

https://doi.org/10.35631/IJEPC.1059014

Keywords:

Large Language Modelling, LLM, Hallucination, Artificial Intelligence

Abstract

The rapid advancement and widespread adoption of Large Language Models (LLMs) have spurred increasing interest in understanding their capabilities and limitations, particularly the phenomenon of "hallucination"—the generation of plausible yet factually incorrect information. This bibliometric review aims to map the scientific landscape and research trends surrounding LLMs and hallucinations within the broader context of Artificial Intelligence (AI). Despite the growing relevance of these issues, the scholarly discourse remains fragmented, necessitating a comprehensive synthesis of the existing literature. To address this gap, we conducted a systematic search of the Scopus database using the keywords “LLM,” “hallucination,” and “AI.” The resulting dataset, comprising 513 relevant publications, was cleaned and standardised using OpenRefine. Further analysis was conducted using the Scopus Analyser to identify publication trends, citation patterns, and prolific contributors. In addition, VOSviewer software was employed to construct co-authorship networks, keyword co-occurrence maps, and thematic clusters. The analysis revealed a marked increase in publications post-2020, with a significant concentration of research in computer science, linguistics, and ethics. Keyword mapping highlighted emerging themes such as factual consistency, trustworthiness, and prompt engineering. Co-authorship networks revealed a growing yet still loosely connected research community. These findings suggest that while interest in LLM hallucinations is rising, deeper interdisciplinary collaboration and more rigorous evaluation frameworks are needed. This study provides a foundational overview of the current research landscape and identifies critical directions for future investigation, especially in mitigating hallucinations and enhancing the reliability of LLM-generated content.

Published

2025-08-07

How to Cite

Mustaffa, N. E., Lai, K. E., Mohd, N. I., Ismail, F., Mohamad Shukery, N., Omar, S. R., Kamarden, H., & Abd Rahman, N. H. (2025). LARGE LANGUAGE MODEL AND HALLUCINATIONS: A BIBLIOMETRIC REVIEW. INTERNATIONAL JOURNAL OF EDUCATION, PSYCHOLOGY AND COUNSELLING (IJEPC), 10(59). https://doi.org/10.35631/IJEPC.1059014