Abstract
Over-reliance on generative artificial intelligence (AI) for writing tasks can have negative effects on engineering education. Despite growing concerns over generative AI’s impact on authentic writing skills, limited research has critically examined alternatives that integrate self-regulated learning (SRL) theories with knowledge visualization tools to foster monitoring and evaluation processes in engineering education. This exploratory study addresses the gap by examining how machine learning-based text analytics can scaffold SRL, extending prior frameworks on information processing. Thirty participants were recruited from two sections of an engineering technology course. As a course task, participants wrote essays using a knowledge visualization system over the semester. SRL skills were measured through a survey, and final course grades served as a measure of learning performance (LP). First, there were no notable relationships among students’ SRL, LP, and writing performance (WP). Second, regression analysis showed that SRL and course grades did not significantly predict WP. Third, engineering students’ WP significantly increased over time. Lastly, there were no differences in WP changes over time between high and low SRL or LP groups. However, there was a main effect of LP on WP, and an interaction effect of LP and time was observed for the evaluating component.
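For readers unfamiliar with the kind of analysis summarized above, the following minimal Python sketch illustrates how a regression of writing performance on SRL scores and course grades could be set up. It is not the authors' code or data: the variable names (srl, course_grade, wp) and the synthetic values are hypothetical, used only to show the structure of such a model.

    # Illustrative sketch only; synthetic data, hypothetical column names.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 30  # the study recruited 30 participants
    df = pd.DataFrame({
        "srl": rng.normal(3.5, 0.5, n),        # survey-based SRL score (assumed scale)
        "course_grade": rng.normal(85, 7, n),  # final course grade as learning performance
        "wp": rng.normal(70, 10, n),           # writing performance score (synthetic)
    })

    # Ordinary least squares: do SRL and course grade predict writing performance?
    model = smf.ols("wp ~ srl + course_grade", data=df).fit()
    print(model.summary())

A non-significant coefficient on srl or course_grade in such a model would correspond to the reported finding that SRL and course grades did not significantly predict WP.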
License
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Article Type: Research Article
Journal of Digital Educational Technology, Volume 6, Issue 1, April 2026, Article No: ep2606
https://doi.org/10.30935/jdet/17551
Publication date: 15 Dec 2025