مجلة الجامعة الإسلامية للعلوم التطبيقية

Balancing Personalization and Transparency in User-Centered AI Systems Through Explainable Deep Learning Interfaces

Ahmed Alshehri

Keywords: UX; XAI; Personalization; User-Centered AI; Deep Learning; Transparency.

General Field: Engineering

Specific Field: Numerical Methods & Computational Intelligence

https://doi.org/10.63070/jesc.2025.026; Received 31 August 2025; Revised 01 October 2025; Accepted 08 October 2025; Available online 12 October 2025.
Abstract

As AI systems deliver increasingly personalized user experiences in contexts such as e-learning, finance, and healthcare, the need for transparency grows accordingly. Deep learning models can produce highly accurate recommendations, but their black-box nature can inhibit user understanding, trust, and control. In this work, we explore the balance between personalization and transparency in user-centered AI systems by integrating explainable AI (XAI) techniques into deep-learning-based recommender systems. We propose a hybrid architecture that combines user behavior embeddings, LSTM/CNN layers, and attention mechanisms. Explanations are delivered to users through SHAP values, attention-based visual cues, and natural-language text that helps them interpret their recommendations in real time. The interface, comprising visual overlays and interactive panels, was designed to adapt to users' cognitive load and preferred explanation types. We evaluated the proposed system in a two-phase user study combining quantitative performance metrics with qualitative data. The results show improved recommendation accuracy, trust, perceived fairness, and user satisfaction when users received explanations. This work indicates how ethical and usable AI systems can be built: explainable interfaces not only enhance the effectiveness of personalized technology but also increase human acceptance of it.
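The paper does not publish its code, but a minimal sketch can illustrate the kind of hybrid architecture the abstract describes: item embeddings feeding parallel LSTM and CNN branches, with attention pooling over the LSTM states. The PyTorch framing, the class name HybridRecommender, and all layer sizes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class HybridRecommender(nn.Module):
    """Hypothetical embeddings + LSTM/CNN + attention recommender."""
    def __init__(self, n_items: int, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        # 1-D convolution over the sequence captures local interaction patterns.
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        # Additive attention assigns a relevance weight to each time step.
        self.attn = nn.Linear(hidden, 1)
        self.out = nn.Linear(2 * hidden, n_items)

    def forward(self, seq: torch.Tensor):
        e = self.item_emb(seq)                          # (B, T, E)
        h, _ = self.lstm(e)                             # (B, T, H)
        w = torch.softmax(self.attn(h), dim=1)          # (B, T, 1)
        context = (w * h).sum(dim=1)                    # attention pooling
        local = torch.relu(self.conv(e.transpose(1, 2))).max(dim=2).values
        scores = self.out(torch.cat([context, local], dim=1))
        # The attention weights are returned so the interface can render
        # per-interaction visual cues alongside each recommendation.
        return scores, w.squeeze(-1)

The abstract also mentions rendering SHAP values as natural-language text. A hedged sketch of one way to do that, via template-based rendering of the top attributions (explain_recommendation and the example feature names are hypothetical):

import numpy as np

def explain_recommendation(feature_names, shap_values, top_k=3):
    """Turn per-feature SHAP attributions into a one-sentence rationale."""
    order = np.argsort(-np.abs(shap_values))[:top_k]
    clauses = []
    for i in order:
        verb = "raised" if shap_values[i] > 0 else "lowered"
        clauses.append(f"{feature_names[i]} {verb} this item's score")
    return "Recommended because " + "; ".join(clauses) + "."

# e.g. explain_recommendation(["watch history", "topic match", "recency"],
#                             np.array([0.42, 0.31, -0.08]))
# -> "Recommended because watch history raised this item's score; ..."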
