HALLUCINATIONS IN LARGE LANGUAGE MODELS: A COMPREHENSIVE STUDY

Authors

  • Prof. More Santosh S.
  • Bhagwan Misal

Keywords:

Large Language Models, Hallucination, AI Reliability, Code Generation, Retrieval-Augmented Generation, AI Ethics, Model Detection.

Abstract

Large Language Models (LLMs) have achieved remarkable success across domains including natural language processing, code generation, content creation, and decision support systems. However, one of the most critical challenges affecting their reliability is hallucination: the generation of factually incorrect, fabricated, or misleading information.
This study examines the phenomenon of hallucinations in Large Language Models, focusing on their types, causes, detection mechanisms, and mitigation strategies. It explores intrinsic, extrinsic, and domain-specific hallucinations, particularly in software code generation. Furthermore, it analyzes how training data, probabilistic token prediction, reasoning limitations, and context-awareness constraints contribute to hallucinations.
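
As a brief illustration of the probabilistic-token-prediction cause noted above, the following Python sketch flags low-confidence spans in a model's output from per-token log-probabilities. It is a minimal, hypothetical example rather than a method from this study: the token list, the log-probability values, and the 20% cutoff are illustrative assumptions.

    import math
    from typing import List, Tuple

    def flag_low_confidence_spans(
        tokens: List[str],
        token_logprobs: List[float],        # per-token log-probabilities, assumed available from the model
        threshold: float = math.log(0.2),   # illustrative cutoff: tokens predicted with < 20% probability
        min_len: int = 2,                   # ignore isolated low-probability tokens
    ) -> List[Tuple[int, int]]:
        # Return (start, end) index ranges where at least min_len consecutive tokens
        # fall below the threshold. Long low-probability runs are a weak, purely
        # statistical hallucination signal: the model is emitting tokens it assigned
        # little probability to.
        spans, start = [], None
        for i, logprob in enumerate(token_logprobs):
            if logprob < threshold:
                if start is None:
                    start = i
            else:
                if start is not None and i - start >= min_len:
                    spans.append((start, i))
                start = None
        if start is not None and len(tokens) - start >= min_len:
            spans.append((start, len(tokens)))
        return spans

    # Hypothetical example: the model is least confident on the fabricated fact.
    tokens = ["The", "capital", "of", "Australia", "is", "Sydney", "."]
    logprobs = [-0.1, -0.2, -0.05, -0.3, -0.1, -2.5, -1.9]
    print(flag_low_confidence_spans(tokens, logprobs))   # -> [(5, 7)]

Such token-level confidence checks are only one of the detection signals surveyed in the study; consistency checks across repeated samples and retrieval-grounded verification address cases where the model is confidently wrong.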


References

Arteaga, Gabriel Y., Thomas B. Schön, and Nicolas Pielawski. "Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models." Proceedings of the 6th Northern Lights Deep Learning Conference (NLDL), PMLR 265 (2025).

Cleti, Meade, and Pete Jano. "Hallucinations in LLMs: Types, Causes, and Approaches for Enhanced Reliability." Preprint (2024).

Liu, Fang, Yang Liu, Lin Shi, Houkun Huang, Ruifeng Wang, Zhen Yang, Li Zhang, Zhongqi Li, and Yuchi Ma. "Exploring and Evaluating Hallucinations in LLM-Powered Code Generation." arXiv preprint arXiv:2404.00971v2 (2024).

Sriramanan, Gaurang, Siddhant Bharti, Shoumik Saha, Priyatham Kattakinda, Vinu Sankar Sadasivan, and Soheil Feizi. "LLM-Check: Investigating Detection of Hallucinations in Large Language Models." 38th Conference on Neural Information Processing Systems (NeurIPS 2024).

Zhang, Ziyao, Chong Wang, Yanlin Wang, Ensheng Shi, Yuchi Ma, Wanjun Zhong, Jiachi Chen, Mingzhi Mao, and Zibin Zheng. "LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation." Proceedings of the ACM on Software Engineering, Vol. 2, No. ISSTA, Article ISSTA022 (2025).


Published

2026-03-20

How to Cite

HALLUCINATIONS IN LARGE LANGUAGE MODELS: A COMPREHENSIVE STUDY. (2026). Phoenix: International Multidisciplinary Research Journal (Peer Reviewed High Impact Journal), 4(1.1), 355-366. https://pimrj.org/index.php/pimrj/article/view/299