WHEN HIGH SCORES DO NOT MEAN ETHICAL UNDERSTANDING: REVISITING AI ETHICS MEASUREMENT AMONG MATRICULATION STUDENTS

Authors

DOI:

https://doi.org/10.35631/IJMOE.829056

Keywords:

AI Ethics, AI Literacy Measurement, Self-Report Bias, Situational Judgement Test, Matriculation Students

Abstract

The rapid integration of Artificial Intelligence (AI) into educational contexts has led to the development of numerous AI literacy instruments aimed at assessing students’ ethical awareness and responsibility. This focus is particularly salient in matriculation colleges, which function as a critical transitional stage between secondary schooling and university education, where students begin to encounter heightened academic autonomy and ethically consequential uses of generative AI. However, most existing measures rely on self-report formats, raising concerns about the validity of inferences drawn from high ethics scores. This study therefore examines whether self-reported AI ethics scores meaningfully reflect students’ ethical understanding and practices in authentic academic contexts. A convergent mixed-methods design was employed, combining a quantitative survey of 355 matriculation students using the ethics-related construct of the Meta AI Literacy Scale (MAILS) with qualitative semi-structured interviews involving three experienced lecturers. Quantitative results indicated high mean scores across all ethics subdimensions. In contrast, qualitative findings revealed systematic discrepancies between perceived and enacted ethical practices, including verbatim use of AI-generated content, limited recognition of ethical boundaries, and recurring academic integrity concerns. These findings suggest that self-report measures may overestimate ethical competence and inadequately capture ethical reasoning as enacted in situationally complex learning environments. From a measurement perspective, the results highlight a risk of construct under-representation when ethical AI use is assessed solely through questionnaires. This study contributes to AI literacy research by examining the construct validity of ethics-related self-report scores and demonstrating the value of behaviourally grounded mixed-methods approaches for strengthening AI ethics measurement.


References

Alkharusi, H. (2022). A descriptive analysis and interpretation of data from Likert scales in educational and psychological research. Indian Journal of Psychology and Education, 12(2), 13–16.

Alnsour, M. M., Qouzah, L., Aljamani, S., Alamoush, R. A., & AL-Omiri, M. K. (2025). AI in education: enhancing learning potential and addressing ethical considerations among academic staff—a cross-sectional study at the University of Jordan. International Journal for Educational Integrity, 21(1). https://doi.org/10.1007/s40979-025-00189-4

Asamoah, M. K., & Amarteifio, J. (2025). Exploring the role and ethical concerns of generative AI in doctoral education at the University of Ghana. SN Social Sciences, 5(9), 1–27. https://doi.org/10.1007/s43545-025-01184-9

Auwal, A. M. (2025). Autonomy versus algorithm: a replication study of student perspectives on AI ethical boundaries. International Journal of Educational Technology in Higher Education, 22(1). https://doi.org/10.1186/s41239-025-00570-w

Culture, A. I., & Humphreys, D. (2025). AI’s Epistemic Harm: Reinforcement Learning, Collective. 1–27.

Floridi, L., & Cowls, J. (2022). A unified framework of five principles for AI in society. Machine Learning and the City: Applications in Architecture and Urban Design, 1, 535–545. https://doi.org/10.1002/9781119815075.ch45

Hasib, M., & Islam, M. S. (2025). How University students in Bangladesh engage with ChatGPT: A qualitative study. PLOS ONE, 20(9), 1–18. https://doi.org/10.1371/journal.pone.0333089

Chan, C. K. Y. (2025). In understandings of academic misconduct. Education and Information Technologies, 30, 8087–8108.

Khan, S., Mazhar, T., Shahzad, T., Khan, M. A., Rehman, A. U., Saeed, M. M., & Hamam, H. (2025). Harnessing AI for sustainable higher education: ethical considerations, operational efficiency, and future directions. Discover Sustainability, 6(1). https://doi.org/10.1007/s43621-025-00809-6

Laine, J., Minkkinen, M., & Mäntymäki, M. (2025). Understanding the Ethics of Generative AI: Established and New Ethical Principles. Communications of the Association for Information Systems, 56, 1–25. https://doi.org/10.17705/1cais.05601

Leaton Gray, S., Edsall, D., & Parapadakis, D. (2025). AI-Based Digital Cheating At University, and the Case for New Ethical Pedagogies. Journal of Academic Ethics, 23(4), 2069–2086. https://doi.org/10.1007/s10805-025-09642-y

Long, D., & Magerko, B. (2020). What is AI Literacy? Competencies and Design Considerations. Conference on Human Factors in Computing Systems - Proceedings, 1–16. https://doi.org/10.1145/3313831.3376727

Messick, S. (1995). Standards of Validity and the Validity of Standards in Performance Assessment. Educational Measurement: Issues and Practice, 14(4), 5–8. https://doi.org/10.1111/j.1745-3992.1995.tb00881.x

Miller, A. L. (2011). Investigating Social Desirability Bias in Student Self-Report Surveys. Association for Institutional Research. http://libdata.lib.ua.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=ED531729&site=eds-live&scope=site

Mujtaba, B. G. (2025). Human-AI Intersection: Understanding the Ethical Challenges, Opportunities, and Governance Protocols for a Changing Data-Driven Digital World. Business Ethics and Leadership, 9(1), 109–126. https://doi.org/10.61093/bel.9(1).109-126.2025

Mustapha, R., Ibrahim, N., Ayub, M. N., Jaafar, A. B., Jusoh, M. K. A. Bin, & Mahmud, M. (2025). From Detection to Prevention: A Human-Centered Design (HCD) Approach to Mitigating AI Misuse in Learning. International Journal of Basic and Applied Sciences, 14(4), 159–167. https://doi.org/10.14419/3t4ghz09

Nelson, A. S., Santamaría, P. V., Javens, J. S., & Ricaurte, M. (2025). Students’ Perceptions of Generative Artificial Intelligence (GenAI) Use in Academic Writing in English as a Foreign Language †. Education Sciences, 15(5). https://doi.org/10.3390/educsci15050611

Ortiz-Bonnin, S., & Blahopoulou, J. (2025). ChatGPT usage in higher education students. Social Psychology of Education, 1–21.

Patterson, F., Zibarras, L., & Ashworth, V. (2016). Situational judgement tests in medical education and training: Research, theory and practice: AMEE Guide No. 100. Medical Teacher, 38(1), 3–17. https://doi.org/10.3109/0142159X.2015.1072619

Paulhus, D. L. (2018). Encyclopedia of Personality and Individual Differences (pp. 1–5). Springer. https://doi.org/10.1007/978-3-319-28099-8

Pudasaini, S., Miralles-Pechuán, L., Lillis, D., & Llorens Salvador, M. (2025). Survey on AI-Generated Plagiarism Detection: The Impact of Large Language Models on Academic Integrity. Journal of Academic Ethics, 23(3), 1137–1170. https://doi.org/10.1007/s10805-024-09576-x

Schiff, D. (2022). Education for AI, not AI for Education: The Role of Education and Ethics in National AI Policy Strategies. International Journal of Artificial Intelligence in Education, 32(3), 527–563. https://doi.org/10.1007/s40593-021-00270-2

Sharma, R. C., & Panja, S. K. (2025). Addressing Academic Dishonesty in Higher Education: A Systematic Review of Generative AI’s Impact. Open Praxis, 17(2), 251–269. https://doi.org/10.55982/openpraxis.17.2.820

Shaw, D. (2025). The digital erosion of intellectual integrity: why misuse of generative AI is worse than plagiarism. AI and Society, 2–4. https://doi.org/10.1007/s00146-025-02362-2

Theoharakis, V., Mylonopoulos, N., & Papadopoulou, K. (2025). AI’s learning paradox: how business students’ engagement with AI amplifies moral disengagement-driven misconduct. Studies in Higher Education, 1–18. https://doi.org/10.1080/03075079.2025.2533365

Țîru, L. G., Gherheș, V., Stoicov, I., & Stanici, M. (2025). Not Ready for AI? Exploring Teachers’ Negative Attitudes Toward Artificial Intelligence. Societies, 15(12), 1–19. https://doi.org/10.3390/soc15120337

UNESCO. (2024). Readiness Assessment Methodology. UNESCO. https://www.unesco.org/ethics-ai/en/ram

van de Mortel, T. F. (2008). Faking it: Social desirability response bias in self-report research. Australian Journal of Advanced Nursing, 25(4), 40–48.

Yan, Y., Liu, H., & Chau, T. (2025). A Systematic Review of AI Ethics in Education: Challenges, Policy Gaps, and Future Directions. Journal of Global Information Management, 33(1), 1–50. https://doi.org/10.4018/JGIM.386381

Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE.

Fetters, M. D., Curry, L. A., & Creswell, J. W. (2013). Achieving integration in mixed methods designs: Principles and practices. Health Services Research, 48(6), 2134–2156.

Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73.

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50(9), 741–749.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.

Published

26-03-2026

How to Cite

Harun, A., & Khairani, A. Z. (2026). WHEN HIGH SCORES DO NOT MEAN ETHICAL UNDERSTANDING: REVISITING AI ETHICS MEASUREMENT AMONG MATRICULATION STUDENTS. INTERNATIONAL JOURNAL OF MODERN EDUCATION (IJMOE), 8(29), 945–960. https://doi.org/10.35631/IJMOE.829056