TRUST IN STUDENTS’ VOLUNTARY USE OF CHATGPT AND GENERATIVE AI IN HIGHER EDUCATION: A SYSTEMATIC LITERATURE REVIEW

DOI:

https://doi.org/10.35631/IJEPC.1162041

Keywords:

Academic Integrity, ChatGPT, Generative Artificial Intelligence, Higher Education Students, Techno-Trust, Technology Adoption, Trust in AI, Voluntary Use

Abstract

The rapid diffusion of generative artificial intelligence (GenAI), particularly tools such as ChatGPT, has intensified students’ voluntary reliance on systems that operate under epistemic uncertainty. While existing research on GenAI adoption predominantly emphasises behavioural intention and usage outcomes, the psychological role of trust in enabling reliance remains conceptually fragmented. This systematic literature review synthesises empirical evidence on trust and techno-trust in students’ voluntary use of GenAI within higher education contexts. Guided by PRISMA 2020, a systematic search of Scopus and Web of Science identified 11 empirical studies published between 2023 and 2025 for descriptive synthesis. The findings reveal substantial heterogeneity in how trust is defined, operationalised, and positioned within empirical models. Trust is predominantly conceptualised as a cognitively oriented evaluation of system reliability or output credibility. Across studies, it is variably positioned as an antecedent, mediator, moderator, or outcome, or left implicitly embedded within adoption constructs. Behavioural intention and use-related outcomes dominate the literature, whereas reliance, acceptance, and calibrated trust receive comparatively limited empirical attention. By consolidating fragmented evidence through an integrative typology and analytical mapping of trust positioning, this review clarifies the inconsistent analytical roles assigned to trust in voluntary GenAI use. The findings reconceptualise trust as a psychological reliance mechanism operating under epistemic risk rather than a peripheral adoption variable. Educationally responsible engagement with GenAI therefore depends not on maximising trust, but on cultivating calibrated and warranted reliance. This synthesis provides a clearer foundation for future research on trust calibration, epistemic judgement, and decision-making in the educational use of generative artificial intelligence.


References

Abdalla, A. M. (2025). Students’ acceptance of generative artificial intelligence in higher education: Examining the role of trust. Education and Information Technologies, 30(2), 1–20.

Al Amin, M., Hossain, M. A., & Islam, M. S. (2025). Techno-trust as a mediator in students’ adoption of generative AI tools in higher education. Computers & Education: Artificial Intelligence, 6, 100201. https://doi.org/10.1016/j.caeai.2024.100201

Al-Dmour, H., Alshurideh, M., & Al Kurdi, B. (2025). Determinants of artificial intelligence adoption in higher education: The role of technology trust. Education and Information Technologies, 30(1), 1–24.

Amaro, J., Costa, C., & Reis, L. P. (2024). Trust in generative artificial intelligence: Students’ perceptions of ChatGPT in academic contexts. IEEE Transactions on Learning Technologies, 17(1), 45–57. https://doi.org/10.1109/TLT.2023.3324567

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 60(6), 1–12. https://doi.org/10.1080/14703297.2023.2190148

Dwivedi, Y. K., Hughes, D. L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., … Williams, M. D. (2023). So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057

Guo, Y., & Erdenebold, T. (2025). Trust in AI output and students’ continuance intention to use generative AI in higher education. Computers & Education, 198, 104750.

Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570

Jo, H. (2024). University students’ use of generative AI for learning: Perceptions of credibility and usage behaviour. Education Sciences, 14(2), 155. https://doi.org/10.3390/educsci14020155

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Kıyak, M., Wiese, E., & Hancock, P. A. (2025). Advice-taking from artificial intelligence: Behavioural indicators of trust in generative systems. Human–Computer Interaction, 40(1), 1–32.

Lankton, N. K., McKnight, D. H., & Tripp, J. F. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10), 880–918. https://doi.org/10.17705/1jais.00411

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335

McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334–359. https://doi.org/10.1287/isre.13.3.334.81

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71

Rahman, M. M., Islam, M. A., & Rahman, M. S. (2023). Determinants of students’ intention to use ChatGPT in higher education: The role of trust. Education and Information Technologies, 28(5), 1–23.

Shahzad, F., Xiu, G., Wang, J., & Shahbaz, M. (2024). Exploring students’ adoption of generative AI tools: The mediating role of technology trust. Interactive Learning Environments, 32(4), 1–17.

Song, Y. (2025). Students’ trust in ChatGPT and its influence on acceptance in higher education. Computers & Education: Artificial Intelligence, 6, 100198. https://doi.org/10.1016/j.caeai.2024.100198

Subhani, M. I., Hasan, S. M., & Mehmood, A. (2025). Factors influencing students’ adoption of generative AI tools: Evidence from higher education. Education and Information Technologies, 30(2), 1–2.

Published

2026-03-11

How to Cite

Ramli, M. B., Hashim, S., Rozali, M. Z., & Abdullah, A. A. M. (2026). TRUST IN STUDENTS’ VOLUNTARY USE OF CHATGPT AND GENERATIVE AI IN HIGHER EDUCATION: A SYSTEMATIC LITERATURE REVIEW. INTERNATIONAL JOURNAL OF EDUCATION, PSYCHOLOGY AND COUNSELLING (IJEPC), 11(62), 680–701. https://doi.org/10.35631/IJEPC.1162041