New Trends on Multimodal Learning Analytics: Using Sensors to Understand and Improve Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 32248

Special Issue Editors


Prof. Dr. Roberto Muñoz
Guest Editor
Escuela de Ingeniería Informática, Universidad de Valparaíso, Chile
Interests: learning analytics; accessibility; technology-enhanced learning

Prof. Dr. Cristian Cechinel
Guest Editor
Centro de Tecnologias, Ciências e Saúde (CTS), Universidade Federal de Santa Catarina, Brazil
Interests: learning analytics; educational technologies; educational data mining; distance learning

Dr. Mutlu Cukurova
Guest Editor
University College London, United Kingdom
Interests: educational technologies; AI in education; human–AI interaction; learning analytics

Special Issue Information

Dear Colleagues,

Educational environments are being transformed by digital technologies. The traditional lecture is gradually being abandoned, and learners are changing from observers into the protagonists of their own learning. As a result, situations in which learners produce unique solutions, interact in groups, or must present their ideas to their peers are challenging to assess and to provide with appropriate feedback [1]. Under this premise, incorporating sensors that capture information about the transformations occurring inside educational settings is essential for the continuous improvement of educational processes.

Multimodal learning analytics (MMLA) is a subfield of learning analytics that deals with data collected and integrated from different sources, allowing a more comprehensive understanding of learning processes and the different dimensions related to learning [2]. MMLA allows the observation of interactions and nuances that are normally overlooked by traditional learning analytics methods, which frequently rely exclusively on computer-based data [3]. In this direction, introducing low-cost sensors gives access to information about learners' interactions with each other and with their surroundings in physical space, which would not be possible with traditional log data alone. A wide range of sensors has been used in MMLA experiments, ranging from those collecting students' motoric (body, head) and physiological (heart, brain, skin) behavior to those capturing the social (proximity), situational, and environmental (location, noise) contexts in which learners are placed [4].
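
To make the combination of log data and sensor streams concrete, the sketch below aligns a hypothetical clickstream with heart-rate and proximity readings on a shared timeline using pandas; the stream names, values, and the 500 ms matching tolerance are illustrative assumptions rather than a prescription from the cited works:

    import pandas as pd

    # Hypothetical streams; timestamps, column names, and values are illustrative.
    logs = pd.DataFrame({
        "t": pd.to_datetime(["2021-03-01 10:00:00.05", "2021-03-01 10:00:01.20"]),
        "event": ["open_task", "submit_answer"],
    })
    heart = pd.DataFrame({
        "t": pd.to_datetime(["2021-03-01 10:00:00.00", "2021-03-01 10:00:01.00"]),
        "bpm": [72, 75],
    })
    proximity = pd.DataFrame({
        "t": pd.to_datetime(["2021-03-01 10:00:00.10", "2021-03-01 10:00:01.10"]),
        "peers_nearby": [2, 3],
    })

    # Attach to each logged event the nearest sensor reading within 500 ms.
    merged = pd.merge_asof(logs.sort_values("t"), heart.sort_values("t"),
                           on="t", direction="nearest",
                           tolerance=pd.Timedelta("500ms"))
    merged = pd.merge_asof(merged, proximity.sort_values("t"),
                           on="t", direction="nearest",
                           tolerance=pd.Timedelta("500ms"))
    print(merged)

Nearest-timestamp matching with an explicit tolerance is one simple way to handle sensors that sample at different rates; resampling all streams to a fixed time window is a common alternative.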

This Special Issue focuses on all kinds of sensors used for collecting data and conducting MMLA studies, as well as on the impacts on learning achieved through the use of those sensors.

The topics of interest include but are not limited to:

  • Wearable trackers;
  • Multimodal classroom analytics;
  • Real-time multimodal data collection;
  • Feedback from multimodal data provided by (and through) sensors;
  • Data collection, analysis methods, and frameworks for MMLA;
  • All kinds of learning experimentations based on multimodal data (collaboration, mobility/location, body postures, gestures, etc.) in different contexts (oral presentation, problem solving, lectures, etc.);
  • Multimodal data representation and visualization;
  • Challenges and limitations on processing and synchronizing data from multiple sources.

Prof. Dr. Roberto Muñoz
Prof. Dr. Cristian Cechinel
Dr. Mutlu Cukurova
Guest Editors

[1] Blikstein, P. Multimodal learning analytics. In Proceedings of the Third International Conference on Learning Analytics and Knowledge, Leuven, Belgium, 8–12 April 2013; pp. 102–106.

[2] Blikstein, P.; Worsley, M. Multimodal Learning Analytics and Education Data Mining: Using Computational Technologies to Measure Complex Learning Tasks. J. Learn. Anal. 2016, 3, 220–238.

[3] Ochoa, X.; Lang, A.C.; Siemens, G. Multimodal learning analytics. In The Handbook of Learning Analytics; Society for Learning Analytics Research, 2017; pp. 129–141.

[4] Di Mitri, D.; Schneider, J.; Specht, M.; Drachsler, H. From signals to knowledge: A conceptual model for multimodal learning analytics. J. Comput. Assist. Learn. 2018, 34, 338–349.

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research


24 pages, 2416 KiB  
Article
A Learning Analytics Framework to Analyze Corporal Postures in Students Presentations
by Felipe Vieira, Cristian Cechinel, Vinicius Ramos, Fabián Riquelme, Rene Noel, Rodolfo Villarroel, Hector Cornide-Reyes and Roberto Munoz
Sensors 2021, 21(4), 1525; https://doi.org/10.3390/s21041525 - 22 Feb 2021
Cited by 8 | Viewed by 3545
Abstract
Communicating in social and public environments is considered a professional skill that can strongly influence career development. It is therefore important to properly train and evaluate students in these abilities so that they can interact better in their professional relationships, during problem solving, negotiations, and conflict management. This is a complex problem, as it involves corporal analysis and the assessment of aspects that until recently were almost impossible to measure quantitatively. Nowadays, a number of new technologies and sensors have been developed to capture different kinds of contextual and personal information, but these technologies have not yet been fully integrated into learning settings. In this context, this paper presents a framework to facilitate the analysis and detection of patterns of students in oral presentations. The framework comprises four steps: Data Collection, Statistical Analysis, Clustering, and Sequential Pattern Mining. The Data Collection step gathers students' interactions during presentations and arranges the data for further analysis. Statistical Analysis provides a general understanding of the collected data by showing the differences and similarities of the presentations along the semester. The Clustering stage segments students into groups according to well-defined attributes, helping to observe different corporal patterns. Finally, the Sequential Pattern Mining step complements the previous stages by identifying sequential patterns of postures in the different groups. The framework was tested in a case study with data collected from 222 freshman students of a Computer Engineering (CE) course at three different times during two different years. The analysis made it possible to segment the presenters into three distinct groups according to their corporal postures. The statistical analysis helped to assess how students' postures evolved throughout each year, and the sequential pattern mining provided a complementary perspective and helped to observe the most frequent postural sequences. Results show that the framework can be used as guidance to provide students with automated feedback on their presentations and can serve as background information for future comparisons of presentations from different undergraduate courses.
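
As a rough, self-contained illustration of the Clustering and Sequential Pattern Mining steps described in this abstract (not the authors' implementation: the posture labels, the number of clusters, and the bigram counting that stands in for full sequential pattern mining are all assumptions), one could proceed as follows in Python:

    from collections import Counter

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-second posture labels for three presenters (assumed data).
    POSTURES = ["open", "closed", "hands_in_pockets"]
    sequences = {
        "s1": ["open", "open", "closed", "open", "open"],
        "s2": ["closed", "closed", "hands_in_pockets", "closed", "closed"],
        "s3": ["open", "closed", "open", "open", "closed"],
    }

    # Clustering step: describe each student by the share of time spent in each
    # posture, then group students with k-means (k = 2 is arbitrary here).
    features = np.array([[seq.count(p) / len(seq) for p in POSTURES]
                         for seq in sequences.values()])
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

    # Sequential pattern mining step, simplified to counting consecutive posture
    # pairs (bigrams) per cluster to surface frequent postural transitions.
    patterns = {c: Counter() for c in set(clusters)}
    for (_, seq), c in zip(sequences.items(), clusters):
        patterns[c].update(zip(seq, seq[1:]))

    for c, counts in patterns.items():
        print(f"cluster {c}:", counts.most_common(3))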

27 pages, 1861 KiB  
Article
A Multimodal Real-Time Feedback Platform Based on Spoken Interactions for Remote Active Learning Support
by Hector Cornide-Reyes, Fabián Riquelme, Diego Monsalves, Rene Noel, Cristian Cechinel, Rodolfo Villarroel, Francisco Ponce and Roberto Munoz
Sensors 2020, 20(21), 6337; https://doi.org/10.3390/s20216337 - 06 Nov 2020
Cited by 10 | Viewed by 3617
Abstract
While technology has helped improve process efficiency in several domains, it still has an outstanding debt to education. In this article, we introduce NAIRA, a multimodal learning analytics platform that provides real-time feedback to foster the efficiency of collaborative learning activities. NAIRA provides real-time visualizations of students' verbal interactions when working in groups, allowing teachers to perform precise interventions to ensure the correct execution of learning activities. We present a case study with 24 undergraduate subjects performing a remote collaborative learning activity based on the Jigsaw learning technique within the COVID-19 pandemic context. The main goals of the study are (1) to qualitatively describe how the teacher used NAIRA's visualizations to perform interventions and (2) to identify quantitative differences in the number of and time between students' spoken interactions across two different stages of the activity, one of them supported by NAIRA's visualizations. The case study showed that NAIRA allowed the teacher to monitor and facilitate the execution of the supervised stage of the activity, even in a remote learning context, with students working in separate virtual classrooms with their video cameras off. The quantitative comparison of spoken interactions suggests differences in the distribution between the monitored and unmonitored stages of the activity, with a more homogeneous speaking-time distribution in the NAIRA-supported stage.
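
One way to quantify the "more homogeneous speaking time distribution" reported above is a dispersion measure such as the Gini coefficient over per-student speaking time. The sketch below uses invented speaking times and is only one possible operationalization, not necessarily the analysis performed in the paper:

    import numpy as np

    def gini(times):
        """Gini coefficient of speaking times: 0 = evenly shared, ~1 = one speaker dominates."""
        x = np.sort(np.asarray(times, dtype=float))
        n = x.size
        cum = np.cumsum(x)
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    # Invented speaking times (seconds) per group member in the two stages.
    unmonitored = [310, 45, 20, 15]   # one student dominates the discussion
    monitored = [120, 95, 90, 85]     # closer to an even split

    print("Gini, unmonitored stage:", round(gini(unmonitored), 3))
    print("Gini, monitored stage:  ", round(gini(monitored), 3))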

20 pages, 8765 KiB  
Article
Mind Wandering in a Multimodal Reading Setting: Behavior Analysis & Automatic Detection Using Eye-Tracking and an EDA Sensor
by Iuliia Brishtel, Anam Ahmad Khan, Thomas Schmidt, Tilman Dingler, Shoya Ishimaru and Andreas Dengel
Sensors 2020, 20(9), 2546; https://doi.org/10.3390/s20092546 - 29 Apr 2020
Cited by 24 | Viewed by 6463
Abstract
Mind wandering is a drift of attention away from the physical world and towards our thoughts and concerns. Mind wandering affects our cognitive state in ways that can foster creativity but hinder productivity. In the context of learning, mind wandering is primarily associated with lower performance. This study has two goals. First, we investigate the effects of text semantics and music on the frequency and type of mind wandering. Second, using eye-tracking and electrodermal features, we propose a novel technique for automatic, user-independent detection of mind wandering. We find that mind wandering was most frequent in texts for which readers had high expertise and that were combined with sad music. Furthermore, a significant increase in task-related thoughts was observed for texts for which readers had little prior knowledge. A Random Forest classification model yielded an F1-score of 0.78 when using only electrodermal features to detect mind wandering, of 0.80 when using only eye-movement features, and of 0.83 when using both. Our findings pave the way for building applications that automatically detect events of mind wandering during reading.
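
The detection approach described above, a Random Forest over eye-movement and electrodermal features, can be sketched as follows; the feature names and the synthetic data are assumptions for illustration, not the study's feature set:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200

    # Invented per-window features: two eye-movement and two electrodermal (EDA) descriptors.
    X = np.column_stack([
        rng.normal(250, 50, n),    # mean fixation duration (ms)
        rng.normal(4.0, 1.0, n),   # saccade rate (1/s)
        rng.normal(0.3, 0.1, n),   # skin-conductance response rate (1/s)
        rng.normal(2.0, 0.5, n),   # tonic EDA level (microsiemens)
    ])
    y = rng.integers(0, 2, n)      # 1 = mind wandering reported, 0 = on task

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # Random labels give near-chance scores; the point is only the shape of the pipeline.
    print("cross-validated F1:", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())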

27 pages, 5648 KiB  
Article
Utilizing Interactive Surfaces to Enhance Learning, Collaboration and Engagement: Insights from Learners’ Gaze and Speech
by Kshitij Sharma, Ioannis Leftheriotis and Michail Giannakos
Sensors 2020, 20(7), 1964; https://doi.org/10.3390/s20071964 - 31 Mar 2020
Cited by 16 | Viewed by 3489
Abstract
Interactive displays are becoming increasingly popular in informal learning environments as an educational technology for improving students' learning and enhancing their engagement. Interactive displays have the potential to reinforce and maintain collaboration and rich interaction with the content in a natural and engaging manner. Despite the increased prevalence of interactive displays for learning, there is limited knowledge about how students collaborate in informal settings and how their collaboration around interactive surfaces influences their learning and engagement. We present a dual eye-tracking study involving 36 participants: a two-stage within-group experiment following a single-group time-series design, with repeated measurement of participants' gaze, voice, game logs, and learning gain tests. Various correlation, regression, and covariance analyses were employed to investigate students' collaboration, engagement, and learning gains during the activity. The results show that, collaboratively, pairs with high gaze similarity have high learning outcomes. Individually, participants who spend a high proportion of time acquiring complementary information from the images and textual parts of the learning material attain high learning outcomes. Moreover, the results show that speech could be an interesting covariate when analyzing the relation between gaze variables and learning gains (and task-based performance). We also show that gaze is an effective proxy for the cognitive mechanisms underlying collaboration, not only in formal settings but also in informal learning scenarios.
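
Gaze similarity between the two members of a pair can be operationalized in several ways; the sketch below compares the partners' fixation-time distributions over shared areas of interest with cosine similarity and correlates the result with per-pair learning gains. The data and the choice of cosine similarity are illustrative assumptions, not necessarily the measure used in the study:

    import numpy as np
    from scipy.stats import pearsonr

    def gaze_similarity(fix_a, fix_b):
        """Cosine similarity of two fixation-time distributions over the same areas of interest."""
        a = np.asarray(fix_a, dtype=float)
        b = np.asarray(fix_b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Invented data: per-partner fixation time (s) on four areas of interest,
    # plus each pair's normalized learning gain (post-test minus pre-test).
    pairs = [
        ((30, 10, 40, 20), (28, 12, 38, 22), 0.55),
        ((50, 5, 10, 35), (10, 40, 30, 20), 0.20),
        ((20, 20, 30, 30), (22, 18, 33, 27), 0.60),
        ((60, 10, 10, 20), (15, 50, 20, 15), 0.25),
    ]

    similarities = [gaze_similarity(a, b) for a, b, _ in pairs]
    gains = [gain for _, _, gain in pairs]
    r, p = pearsonr(similarities, gains)
    print(f"Pearson r = {r:.2f} (p = {p:.3f}) across {len(pairs)} pairs")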

15 pages, 1831 KiB  
Article
Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data
by Xiang Li, Rabih Younes, Diana Bairaktarova and Qi Guo
Sensors 2020, 20(7), 1949; https://doi.org/10.3390/s20071949 - 31 Mar 2020
Cited by 4 | Viewed by 3193
Abstract
The difficulty level of learning tasks is a concern that often needs to be considered in the teaching process. Teachers usually adjust the difficulty of exercises dynamically according to students' prior knowledge and abilities to achieve better teaching results. In e-learning, because there is no teacher involvement, it often happens that the difficulty of the tasks is beyond the ability of the students. In attempts to solve this problem, several researchers have investigated the problem-solving process using eye-tracking data. However, although most e-learning exercises take the form of fill-in-the-blank and multiple-choice questions, previous research focused on building cognitive models from eye-tracking data collected from more flexible problem forms, which may lead to impractical results. In this paper, we build models to predict the difficulty level of spatial visualization problems from eye-tracking data collected from multiple-choice questions. We use eye tracking and machine learning to investigate (1) the difference in eye movement among questions of different difficulty levels and (2) the possibility of predicting the difficulty level of problems from eye-tracking data. Our models achieved an average accuracy of 87.60% on eye-tracking data from questions that the classifier had seen before and an average of 72.87% on questions that it had not yet seen. The results confirm that eye movement, especially fixation duration, contains essential information on the difficulty of the questions and is sufficient to build machine-learning-based models that predict difficulty level.
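
A minimal version of the prediction task described above, classifying a question's difficulty level from fixation-duration statistics, might look like the following; the features, labels, and choice of classifier are assumptions for illustration:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 150

    # Invented per-question eye-tracking features and difficulty labels (0 = easy ... 2 = hard).
    X = np.column_stack([
        rng.normal(300, 60, n),        # mean fixation duration (ms)
        rng.normal(25000, 5000, n),    # total fixation duration (ms)
        rng.normal(80, 15, n),         # number of fixations
    ])
    y = rng.integers(0, 3, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = SVC(kernel="rbf").fit(X_train, y_train)
    # Accuracy is near chance on random labels; only the pipeline shape matters here.
    print("accuracy:", model.score(X_test, y_test))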

Review


26 pages, 2682 KiB  
Review
Multimodal Data Fusion in Learning Analytics: A Systematic Review
by Su Mu, Meng Cui and Xiaodi Huang
Sensors 2020, 20(23), 6856; https://doi.org/10.3390/s20236856 - 30 Nov 2020
Cited by 41 | Viewed by 8059
Abstract
Multimodal learning analytics (MMLA), which has become increasingly popular, can help provide an accurate understanding of learning processes. However, it is still unclear how multimodal data are integrated into MMLA. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this paper systematically surveys 346 articles on MMLA published during the past three years. For this purpose, we first present a conceptual model for reviewing these articles along three dimensions: data types, learning indicators, and data fusion. Based on this model, we then answer two questions: (1) what types of data and learning indicators are used in MMLA, and how are they related; and (2) how can the data fusion methods in MMLA be classified. Finally, we point out the key stages in data fusion and future research directions in MMLA. Our main findings are: (a) the data in MMLA can be classified into digital data, physical data, physiological data, psychometric data, and environment data; (b) the learning indicators are behavior, cognition, emotion, collaboration, and engagement; (c) the relationships between multimodal data and learning indicators are one-to-one, one-to-any, and many-to-one, and these complex relationships are the key to data fusion; (d) the main data fusion methods in MMLA are many-to-one, many-to-many, and multiple validation among multimodal data; and (e) multimodal data fusion can be characterized by the multimodality of the data, the multiple dimensions of the indicators, and the diversity of the methods.
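
The "many-to-one" fusion highlighted by the review, where several modalities feed a single learning indicator, is commonly realized either by concatenating features (early fusion) or by combining per-modality predictions (late fusion). The sketch below contrasts the two on invented data; it is a generic illustration rather than a method taken from the reviewed papers:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 120
    gaze = rng.normal(size=(n, 3))     # invented eye-tracking features
    audio = rng.normal(size=(n, 2))    # invented speech features
    engaged = rng.integers(0, 2, n)    # single learning indicator (engagement)

    # Early (feature-level) fusion: one model over the concatenated feature vector.
    early = LogisticRegression().fit(np.hstack([gaze, audio]), engaged)

    # Late (decision-level) fusion: one model per modality, probabilities averaged.
    m_gaze = LogisticRegression().fit(gaze, engaged)
    m_audio = LogisticRegression().fit(audio, engaged)
    late_prob = (m_gaze.predict_proba(gaze)[:, 1] + m_audio.predict_proba(audio)[:, 1]) / 2

    print("early fusion accuracy:", early.score(np.hstack([gaze, audio]), engaged))
    print("late fusion accuracy:", float(((late_prob > 0.5) == engaged).mean()))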
