A Learner Engagement Estimation and Support System Using PC Built-in Camera
Xianwen Zheng
Japan Advanced Institute of Science and Technology (JAIST), Japan
Over the last decades, rapid advances in technology have made online learning well established in higher education. Furthermore, measures to control COVID-19 made face-to-face teaching impossible and forced schools to shift to an online teaching model. Nevertheless, heavily theoretical course content and limited practice mean that learners taking online courses often cannot concentrate on the courses and therefore cannot maintain high learning efficiency. In addition, the lack of connection in online learning makes it hard for instructors to grasp each learner's situation and to judge whether the course content suits the learners' current level. Thus, estimating learners' engagement and giving corresponding technological support to both instructors and learners is necessary. However, research on learner engagement support systems is limited [1,2,3], and no research combines affective engagement detection with behavioral engagement intervention. Therefore, this research aims to analyze learners' engagement using recorded time-series facial expressions and body features and to develop a learner engagement support system that facilitates online learning and improves education quality.
From an online education perspective, we define engagement in three facets. Affective engagement refers to students being attracted to the course or task and enjoying it. Behavioral engagement refers to students' participation in classroom and extra-curricular activities, and also relates to asking questions and contributing to class discussion. Cognitive engagement depends on affective and behavioral engagement and is directly related to learning goals. We introduce the research content from these three aspects.
1. Affective engagement estimation: Improving the performance of engagement estimation and analysis through the following steps (see the model sketch after this list). a) A new dataset was collected, and each video was labeled with an engagement level; the new dataset will be merged with a public engagement research dataset. b) Because eye features have a more substantial influence than other features, we updated the eye/eyebrow features fed into the sequence deep learning models. c) To improve the pre-trained deep learning models, we rearranged the structure of the LSTM/QRNN models.
2. Behavioral engagement intervention: Designing engagement feedback loops to support online learning from the following perspectives. a) An engagement intervention model will be proposed: we will survey existing engagement intervention models and collect learning activities that effectively maintain engagement. b) An engagement support system will be developed: the system integrates the affective engagement estimation deep learning model with an intervention model that suggests appropriate learning activities.
3. Engagement assessment framework: The engagement elements depend on one another; therefore, an assessment framework for each element is the basis of this research. a) Affective engagement: applying the proposed engagement analysis system to evaluate intervention effectiveness. b) Behavioral engagement: analyzing and visualizing the learning process using intervention and learning activity histories, including the dropout rate. c) Cognitive engagement: analyzing information from exams, questionnaires, and self-reports about motivation, performance, and learned skills.
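As a concrete illustration of item 1, the following is a minimal sketch of a sequence classifier over per-frame features, assuming PyTorch; the feature dimensionality, layer sizes, and class name are illustrative assumptions, not our exact experimental configuration.

```python
# Minimal sketch of a sequence classifier for affective engagement estimation.
# Assumptions (not the exact experimental setup): PyTorch, 64-dimensional
# per-frame feature vectors (e.g., eye/eyebrow and body features) already
# extracted, and four engagement levels as in DAiSEE.
import torch
import torch.nn as nn

class EngagementLSTM(nn.Module):
    def __init__(self, feature_dim=64, hidden_dim=128, num_levels=4):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, num_layers=2,
                            batch_first=True, dropout=0.3)
        self.head = nn.Linear(hidden_dim, num_levels)

    def forward(self, x):
        # x: (batch, frames, feature_dim) time-series features for one clip
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])            # logits over engagement levels

# Usage: a 10-second clip at 30 fps gives 300 frames of 64-dim features.
model = EngagementLSTM()
clip = torch.randn(1, 300, 64)
predicted_level = model(clip).argmax(dim=1)  # engagement level 0-3
```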
For the affective engagement estimation part, the proposed optimized network structure achieved an engagement estimation accuracy of 68.5% with sequence deep learning models, which is 10% higher than the baseline on the DAiSEE dataset. Moreover, to address the insufficient-data issue, we applied transfer learning: a deep learning model was pre-trained on the DAiSEE dataset and then transferred to a newly composed time-series dataset. The resulting accuracy on the new time-series dataset is 63.7%, 2% higher than the baseline.
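A sketch of how the transfer step could be realized, reusing the EngagementLSTM class from the sketch above; the checkpoint path, data loader, freezing policy, and hyperparameters are illustrative assumptions rather than the actual training code.

```python
# Sketch of the transfer-learning step: load weights pre-trained on DAiSEE,
# freeze the recurrent layers, and fine-tune only the classification head on
# the new time-series dataset. Paths and data below are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

model = EngagementLSTM()                     # class from the sketch above
# model.load_state_dict(torch.load("daisee_pretrained.pt"))  # hypothetical checkpoint

for param in model.lstm.parameters():        # freeze the feature extractor
    param.requires_grad = False

# Random tensors stand in for the new dataset's extracted features and labels.
new_data = TensorDataset(torch.randn(32, 300, 64), torch.randint(0, 4, (32,)))
loader = DataLoader(new_data, batch_size=8, shuffle=True)

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

for features, labels in loader:              # one fine-tuning pass (illustrative)
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
```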
From this result, we found that eye information, such as eye gaze, blinking, and eye movement, is more important than we initially assumed for our experiments at this stage. Body actions and movements improved the performance of the sequence deep learning models, but also introduced partially redundant information that interferes with estimation. We need to standardize the time-series body features and refine the designed features to further improve the accuracy of the proposed optimized-structure models. In future work, we will design and evaluate behavioral engagement intervention and address the impact of individual and cultural differences.
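As one possible realization of the standardization step mentioned above, a per-feature z-score normalization over the training clips could look as follows (a sketch only; the actual feature refinement may differ).

```python
# Sketch: z-score standardization of time-series body features, with statistics
# computed per feature dimension over all training clips and frames.
import numpy as np

def standardize_features(train_seqs, eps=1e-8):
    """train_seqs: array of shape (clips, frames, features)."""
    mean = train_seqs.mean(axis=(0, 1), keepdims=True)
    std = train_seqs.std(axis=(0, 1), keepdims=True)
    return (train_seqs - mean) / (std + eps), mean, std

train = np.random.randn(100, 300, 64)        # placeholder feature tensor
train_std, mu, sigma = standardize_features(train)
# Test-time features would be standardized with the same mu and sigma.
```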
This work was supported by JSPS KAKENHI Grant Number 20H04294 and Photron Limited.
[1] A. Sengupta & S. Williams (2021): Can an Engagement Platform Persuade Students to Stay? Applying Behavioral Models for Retention, International Journal of Human-Computer Interaction, DOI: 10.1080/10447318.2020.1861801.
[2] Schmid, A., Melzer, P., and Schoop, M. (2020), "Gamifying Electronic Negotiation Training - A Mixed Method Study of Students' Motivation, Engagement and Learning," In Proceedings of the 28th European Conference on Information Systems (ECIS), An Online AIS Conference, 2020. https://aisel.aisnet.org/ecis2020_rp/131.
[3] Y. Adachi, and A. Kashihara; A Partner Robot for Promoting Collaborative Reading, Proc. of the International Conference on Smart Learning Environments (ICSLE 2019), pp.15-24 (2019).