Modeling of Adaptive Knowledge Testing: Efficiency Threshold, Task Complexity and Completion Time

Authors

  • O. F. Shevchuk Vinnytsia National Technical University
  • A. A. Yarovyi Vinnytsia National Technical University
  • Yu. M. Panochyshyn Vinnytsia National Technical University
  • S. I. Petryshyn Vinnytsia National Technical University
  • O. A. Kozlovskyi Vinnytsia National Technical University

DOI:

https://doi.org/10.31649/1997-9266-2025-178-1-104-112

Keywords:

modeling, adaptive testing, integral assessment, task completion time, task complexity

Abstract

A comprehensive information-analytical review has been conducted to evaluate the feasibility of implementing adaptive computer-based knowledge testing for specific academic disciplines in educational institutions. The shortcomings of the traditional approach have been identified: it imposes fixed time constraints for test completion without considering the individual characteristics of learners and can provoke negative reactions among test participants. As an alternative, an integral assessment approach is proposed that accounts for both task complexity and task completion time. An adaptive algorithm has been developed based on the efficiency threshold q, which determines how the difficulty level of subsequent tasks is adjusted depending on the integral assessment of the previous task. Simulation modeling was carried out in Python to verify the effectiveness of the proposed approach. A test dataset comprising tasks of three complexity levels was created, with completion times modeled according to the normal distribution. The analysis revealed that large differences in task difficulty levels necessitate separate efficiency thresholds for each category of questions, whereas minor differences allow a single threshold for all test tasks. The parameters of the integral assessment were tuned on the test dataset, and the effectiveness of the proposed method was examined. The obtained coefficients of the integral assessment can serve as baseline values during the initial implementation phase of the system, with further optimization based on model training results during pilot testing. The described methodology is flexible and easy to implement, allowing parameter customization and effective adaptation both to the individual characteristics of learners and to the specific requirements of individual disciplines. Furthermore, recording task completion times can serve as an additional tool for assessing the quality of test items.
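
For illustration only, the following minimal Python sketch (not the authors' implementation) mirrors the logic described above: an integral assessment that combines answer correctness, task complexity, and completion time, and an efficiency threshold q that decides whether the next task should be harder or easier. The three complexity levels, the threshold q, and the normally distributed completion times follow the abstract; the scoring formula, the weights alpha and beta, and the per-level time parameters are assumed values, not the coefficients obtained in the article.

    import random

    # Hypothetical integral assessment: combines answer correctness, task complexity,
    # and completion time. The formula and the weights alpha, beta are illustrative
    # assumptions, not the coefficients tuned in the article.
    def integral_score(correct, complexity, time_spent, t_expected,
                       max_complexity=3, alpha=0.7, beta=0.3):
        overrun = max(0.0, time_spent - t_expected)
        time_factor = max(0.0, 1.0 - overrun / t_expected)  # 1.0 if within the expected time
        return alpha * correct * (complexity / max_complexity) + beta * time_factor

    # Efficiency threshold q: the next task is harder if the integral score of the
    # previous task reaches q, and easier otherwise (bounded by the available levels).
    def next_complexity(score, q, current, levels=(1, 2, 3)):
        idx = levels.index(current)
        return levels[min(idx + 1, len(levels) - 1)] if score >= q else levels[max(idx - 1, 0)]

    # Completion times in the simulated dataset are drawn from a normal distribution,
    # as in the article; the mean/sigma values per complexity level are assumed.
    TIME_PARAMS = {1: (30.0, 5.0), 2: (60.0, 10.0), 3: (90.0, 15.0)}

    def simulate_completion_time(complexity):
        mu, sigma = TIME_PARAMS[complexity]
        return max(1.0, random.gauss(mu, sigma))

For example, with q = 0.6 a correct answer to a level-2 task completed within the expected time gives integral_score(1, 2, 55, 60) ≈ 0.77, so next_complexity(0.77, 0.6, 2) selects a level-3 task for the next step.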

Author Biographies

O. F. Shevchuk, Vinnytsia National Technical University

Cand. Sc. (Phys.-Math.), Associate Professor, Associate Professor of the Chair of Computer Science

A. A. Yarovyi, Vinnytsia National Technical University

Dr. Sc. (Eng.), Professor, Head of the Chair of Computer Science

Yu. M. Panochyshyn, Vinnytsia National Technical University

Cand. Sc. (Eng.), Associate Professor, Associate Professor of the Chair of Computer Science

S. I. Petryshyn, Vinnytsia National Technical University

Cand. Sc. (Eng.), Senior Lecturer of the Chair of Computer Science

O. A. Kozlovskyi, Vinnytsia National Technical University

Student of the Department of Intelligent Information Technologies and Automation

References

A. Frey, T. Liu, A. Fink, and C. König, “Meta-Analysis of the Effects of Computerized Adaptive Testing on the Motivation and Emotion of Examinees,” Eur. J. Psychol. Assess., vol. 40, no. 5, pp. 427-443, 2024. https://doi.org/10.1027/1015-5759/a000821 .

N. Sherkuziyeva, et al., “The comparative effect of computerized dynamic assessment and rater mediated assessment on EFL learners’ oral proficiency, writing performance, and test anxiety,” Lang. Test Asia, vol. 13, no. 15, 2023, https://doi.org/10.1186/s40468-023-00227-3 .

P. Gawliczek, V. Krykun, N. Tarasenko, M. Tyshchenko, and O. Shapran, “Computer Adaptive Language Testing According to NATO STANAG 6001 Requirements,” Adv. Educ., no. 8 (17), pp. 19-26, 2021, https://doi.org/10.20535/2410-8286.225018 .

В. В. Камінський, В. А. Мізюкі, Р. Д. Турчанінов, «Аналіз ефективності штучного інтелекту в адаптивних навчальних платформах для індивідуалізації освітнього процесу,» ZENODO, Груд. 2024. [Online]. Available: https://doi.org/10.5281/zenodo.14562152 .

A. Trifa, A. Hedhili, and W. L. Chaari, “Knowledge tracing with an intelligent agent, in an e-learning platform,” Educ. Inf. Technol., vol. 24, no. 1, pp. 711-741, 2019. https://doi.org/10.1007/s10639-018-9792-5 .

Я. Б. Сікора, Методичні рекомендації до розробки та використання адаптивних тестових завдань, Житомир: вид-во ЖДУ ім. Івана Франка, 2024. [Online]. Available: http://eprints.zu.edu.ua/41797/1/metod_test.pdf .

О. Радкевич, «Адаптивне тестування в контексті використання електронних засобів навчання. Суть, розроблення та оцінювання,» Професійна педагогіка, т. 1, № 26, c. 58-73, 2023. https://doi.org/10.32835/2707-3092.2023.26.58-73 .

С. Загребельний, «Використання комп’ютерного адаптивного тестування у ДДМА на платформі Moodle,» Техн. Електр. Навч., no. 3, Лист., 2019.

Н. Васюкова, В. Крикун, Ю. Грищук, і А. Кравчук, «Експериментальна перевірка результативності методики комп’ютерного адаптивного мовного тестування відповідно до вимог НАТО STANAG 6001,» Людинознавчі студії. Серія «Педагогіка», № 19 (51), с. 9-17, 2024. https://doi.org/10.24919/2413-2039.19/51.1 .

W. J. van der Linden, Linear models for optimal test design, Springer, 2005. https://doi.org/10.1007/0-387-29054-0 .

С. Л. Загребельний, М. В. Брус, “Адаптивне тестування як один із способів перевірки знань студентів у технічному вузі,” Науковий Вісник ДДМА, vol. 1, no. 22Е, pp. 155-162, 2017.

E. H. Am, I. Hidayah, and S. S. Kusumawardani, “A Literature Review of Knowledge Tracing for Student Modeling: Research Trends, Models, Datasets, and Challenges,” J. Inf. Technol. Comput. Sci., vol. 6, no. 2, 2021, https://doi.org/10.25126/jitecs.202162344 .

S. Liu, R. Zou, J. Sun, K. Zhang, L. Jiang, and D. Zhou, “A Hierarchical Memory Network for Knowledge Tracing,” Expert Syst. Appl., vol. 177, p. 114935, 2021. https://doi.org/10.1016/j.eswa.2021.114935 .

O. Bulut, J. Shin, S. N. Yildirim-Erbasli, G. Gorgun, and Z. A. Pardos, “An Introduction to Bayesian Knowledge Tracing with pyBKT,” Psych, vol. 5, no. 3, pp. 770-786, 2023. https://doi.org/10.3390/psych5030050 .

F. Liu, X. Hu, C. Bu, and K. Yu, “Fuzzy Bayesian Knowledge Tracing,” IEEE Trans. Fuzzy Syst., vol. 6706, pp. 1-15, 2021. https://doi.org/10.1109/TFUZZ.2021.3083177 .

W. J. van der Linden, “Modeling response times with latent variables: Principles and applications,” Psychol. Test Assess. Model., vol. 53, no. 3, pp. 334-358, 2011.

W. J. van der Linden and X. Xiong, “Speededness and Adaptive Testing,” J. Educ. Behav. Stat., vol. 38, pp. 418-438, 2013.

W. J. van der Linden, D. J. Scrams, and D. L. Schnipke, “Using Response-Time Constraints to Control for Differential Speededness in Computerized Adaptive Testing,” Appl. Psychol. Meas., vol. 23, no. 3, pp. 195-210, 1999. https://doi.org/10.1177/01466219922031329 .

H. Sie, M. D. Finkelman, B. Riley, and N. Smits, “Utilizing Response Times in Computerized Classification Testing,” Appl. Psychol. Meas., vol. 39, no. 5, pp. 389-405, 2015. https://doi.org/10.1177/0146621615569504.

B. Becker, P. van Rijn, D. Molenaar, and D. Debeer, “Item order and speededness: implications for test fairness in higher educational high-stakes testing,” Assess. Eval. High. Educ., vol. 47, no. 7, pp. 1030-1042, 2021. https://doi.org/10.1080/02602938.2021.1991273 .

R. H. Klein Entink, G. J. A. Fox, and W. J. van der Linden, “A Box-Cox normal model for response times,” Br. J. Math. Stat. Psychol., vol. 62, no. 2, pp. 621-640, 2009. https://doi.org/10.1348/000711008X354742 .

Published

2025-02-27

How to Cite

[1] O. F. Shevchuk, A. A. Yarovyi, Yu. M. Panochyshyn, S. I. Petryshyn, and O. A. Kozlovskyi, “Modeling of Adaptive Knowledge Testing: Efficiency Threshold, Task Complexity and Completion Time”, Вісник ВПІ, no. 1, pp. 104–112, Feb. 2025.

Section

Information technologies and computer sciences
