
Effects of Different Types of Cues and Self-Explanation Prompts in Instructional Videos on Deep Learning: Evidence from Multiple Data Analysis

Zheng, Xudong; Ma, Yunfei; et al.
In: Educational Technology Research and Development, Vol. 71 (2023), No. 3, pp. 807-831


The purpose of this study was to investigate the effects of different types of cues and self-explanation prompts in instructional videos on intrinsic motivation, learning engagement, learning outcomes, and cognitive load, which served as indicators of deep learning performance. Seventy-two college students were randomly assigned to one of six conditions in a 3 × 2 factorial design with cues (visual vs. textual vs. combined textual-&-visual) and self-explanation prompts (prediction vs. reflection) as between-subjects factors. To measure participants' learning engagement, a NeuroSky MindWave Mobile EEG headset and a Tobii Pro X3-120 eye tracker were used to collect brain wave data and eye movement data, respectively. Learning outcomes were measured with retention and transfer tests, and questionnaires were used to measure participants' intrinsic motivation and cognitive load. The results revealed that the textual cues significantly facilitated learning outcomes and learning engagement (attention), while the reflection prompts significantly affected learning engagement (mean fixation duration) and cognitive load. Notably, the combination of textual cues and reflection prompts and the combination of visual cues and prediction prompts allowed participants to focus on and engage in the video learning process more deeply, resulting in significantly higher learning outcomes than those of their peers in other conditions. This research provides implications for designing short instructional videos that facilitate deep learning.

Keywords: Cues; Self-explanation prompts; Deep learning; Cognitive processing; Instructional video


Introduction

Nowadays, the instructional video has become one of the most common and powerful digital learning resources. Short instructional videos in particular present information visually and vividly and are favored by learners of all ages (Wang et al., [67]; Zhu et al., [76]). Notably, high-quality instructional videos can guide learners' attention to specific information in the complex dynamic materials being shown, helping them build a complete mental model and accurately understand the objects, knowledge, and causal relations in the visual representation (De Koning et al., [22]; Wang et al., [67]). That is, well-designed instructional videos can support deep learning: learners gain a positive, intrinsically motivating experience, engage with deep focus, and thereby improve their knowledge retention and transfer ability (Aguiar-Castillo et al., [1]; Baeten et al., [7]; Grover et al., [28]). However, when students use instructional videos whose content is challenging, or whose complex content is presented poorly, without being given the necessary learning scaffolds, they may experience high cognitive load, mind wandering, and only surface-level cognitive activity. For instance, Bayraktar and Bayram ([9]) found that instructional videos lacking the necessary cues for critical information easily lead students to ignore that information rather than attend to the entire materials being shown, resulting in change blindness, which does not benefit deep learning performance. According to Instructional Transaction Theory, high-quality instructional videos should guide students to pay attention to critical information and foster them to engage actively in deep cognitive activities and construct meaning (Merrill, [48]). Therefore, embedding the necessary prompting signals for vital information when developing instructional videos, or providing learning scaffolds while students learn with the videos, may help promote their deep learning performance.

Previous studies have demonstrated that cues and self-explanation prompts can potentially facilitate students' learning in multimedia environments (Lin & Atkinson, [40]; Lin et al., [41]). In addition, Wang et al. ([67]) explored the effects of cues in short instructional videos with eye-tracking technologies and found that the added cues improved students' knowledge retention scores. Building on this previous research, the current study investigates the effects of different cues (visual vs. textual vs. combined textual-&-visual) and self-explanation prompts (prediction vs. reflection) on students' deep learning with instructional videos. Moreover, we adopt a multiple-data method to evaluate the impact of these two strategies on indicators of students' deep learning performance: intrinsic motivation, learning engagement (cognitive, emotional, and behavioral engagement), learning outcomes, and cognitive load.

Deep learning

In recent decades, educators and researchers have paid much attention to promoting the effective occurrence of students' deep learning. To characterize the deep learning approach clearly, they usually compare it with the surface learning approach, which aims at rote memorization or simple recollection of learning content. The occurrence of deep learning requires students to understand and construct meaning critically, seek principles to organize learning information, find relationships among learning materials, and apply knowledge smoothly to new situations (Akyol & Garrison, [2]; Filius et al., [25]; Koszalka et al., [37]). By contrast, students learning with a surface approach generally remember existing information, absorb new information at a primary level, and understand knowledge superficially, rarely engaging in high-level thinking activities (Offir et al., [54]). Notably, previous studies have reported that students are likely to produce high-quality learning outcomes with the deep learning approach in digital learning settings. For instance, Chen et al. ([17]) used collaborative concept mapping as a learning scaffold strategy to foster deeper learning. They found that students had a more profound exchange of ideas, better knowledge convergence, and higher problem-solving skills as a result of participating in deep learning with the collaborative concept mapping strategy. Furthermore, Wang et al. ([66]) established that deep learning triggered by problem-solving could enable students to manage complexity and challenges effectively and further promote their high-level learning participation and performance.

Deep learning performance involves both students' internal cognitive processing and their external learning outcomes. Wang et al. ([66]) stated that deep learning is characterized by a high level of engagement in learning, driven by intrinsic motivation and sustained engagement, and by the achievement of a high level of understanding and performance. Previous studies have found that students who use a deep learning approach may have more intrinsic motivation, driving them to seek and construct meaning with a high level of engagement and to achieve a deeper understanding and better performance (Lu et al., [43]; Marton & Säljö, [44]). Vos et al. ([64]) and Chen ([16]) found that students' intrinsic motivation and learning engagement, respectively, were positively related to their deep learning. Moreover, Lei et al. ([39]) found that learning engagement was crucial for deep and productive learning; that is, students with higher learning engagement might achieve more. In sum, students' intrinsic motivation level and learning engagement might serve as essential indicators of the occurrence of deep learning (Biggs, [10]; Chen, [16]; Wang et al., [66]).

Cognitive load theory reveals the effect of cognitive load on the depth of learning processes and on learning performance. When learners engage in deep learning, if the information in the learning materials is excessive, or the complexity of the presentation or the difficulty of the learning task exceeds the learners' cognitive capacity, cognitive overload occurs, which may decrease learning performance.

Prior research also found that intrinsic motivation and cognitive load should be considered and measured because they contribute differently to students' learning processes and outcomes in multimedia environments (Lin et al., [41]). Therefore, the current study evaluated cognitive load as another essential indicator of deep learning performance. Furthermore, researchers have generally used knowledge retention and transfer tests to measure and assess students' learning outcomes (Schneider et al., [61]; Wang et al., [67]). In research on cueing effects, Arslan-Ari et al. ([4]) used these tests to measure the impacts of prior knowledge and cues on students' learning and mental effort. Wang et al. ([67]) used retention and transfer test scores to measure students' learning outcomes when exploring the effects of cues embedded in short instructional videos. Ozcelik et al. ([55]) also combined eye-tracking data with retention and transfer test scores to understand students' deep learning processes and learning performance. Thus, students' retention and transfer test scores served as important indicators for evaluating deep learning outcomes in the current study.

Cues and cueing effect in learning

According to the attention-guiding principle, also known as the signaling principle, cues are information purposefully added to instructional materials to draw students' attention and provide visual scaffolding for better learning, such as highlighting essential elements through color or text-picture references (Juliane et al., [35]; Schneider et al., [61]; Wang et al., [67]). The Cognitive Theory of Multimedia Learning holds that attention-guiding features (cues) should be used in instructional materials to promote the selection of essential information during learning. According to Mayer ([46]), there are three types of cues in instructional animations: textual cues, visual cues, and combined textual-&-visual cues. Textual cues consist of explanatory text inserted into the learning materials, while visual cues mainly use directional arrows, color, "searchlight" effects, and dynamic gestures. Visual and textual cues are often combined to guide attention across different learning materials (Xie et al., [70]). In addition, Mayer ([47]) termed the beneficial impact of these cues on learning outcomes the cueing effect.

Studies have shown that cues can positively affect learning (Arslan-Ari et al., [4]; Brasel & Gips, [13]; De Koning et al., [22]; Lin et al., [41]). Lin et al. (2011) added red arrows as visual cues to animated learning materials and showed that such signals could effectively improve learning and shorten learning time. Wang et al. ([67]) used an eye-tracking approach together with retention and transfer tests to explore the functions and designs of cues added to short instructional videos. They found that instructional videos with added visual cues could promote deep cognitive processing and improve students' scores on knowledge retention and transfer tests. Yung and Paas ([75]) studied the effect of a teaching agent, a kind of cue, that pointed out essential information in an instructional animation about the cardiovascular system, but they did not find that this cue positively affected students' learning or reduced their cognitive load. A meta-analysis of 95 studies on the cueing effect in learning with media showed that cues benefit students' learning achievement (Schneider et al., [61]). However, Johnson et al. ([34]) found that multimedia materials with visual cues (pointing arrows) could speed up learning and reduce cognitive load but had no significant impact on academic performance. Plass et al. ([57]) did not find that gray visuals or warm-color-cued visuals significantly affected individuals' cognitive load or intrinsic motivation. Lin et al. ([41]) likewise found that cueing had no significant effect on students' intrinsic motivation or cognitive load.

Although the appropriate use of cues (e.g., textual, visual, and combined textual-&-visual cues) can promote learners' attention to, integration of, and deep processing of important information, the empirical results of these studies still raise doubts about whether cues effectively promote deep learning. Some researchers believe that cues may effectively attract learners' attention to crucial learning information but cannot enhance learners' engagement, making it difficult to bring about deep learning (De Koning et al., [21]; Lin et al., [41]). For instance, van der Meij and de Jong ([63]) argue that cues support learners in relating representations at a surface level, but that gaining a deeper understanding of knowledge by reasoning with these cues requires additional cognitive support. Therefore, incorporating cognitive strategies as instructional aids alongside cueing in learning materials should be considered to facilitate learners' mental model construction, which may foster deeper cognition and better learning outcomes. Notably, self-explanation prompts may be effective cognitive strategies for scaffolding learners' engagement in cognitive processes and deep learning with instructional videos (Roy & Chi, [59]; Yeh et al., [73]; van der Meij et al., 2011).

Self-explanation prompts enhance learning processes

Chi et al. ([19]) put forward the concept of self-explanation when they experimentally analyzed the differences between "excellent students" and "poor students" in problem-solving during physics learning. Fonseca and Chi ([26]) regarded self-explanation as a constructive, in-depth cognitive activity in which learners respond effectively to prompts (additional non-content information) to understand new content deeply or to construct meaning effectively. That is, self-explanation prompts are considered essential instructional aids that scaffold and encourage learners to engage in cognitive construction, facilitating deep and powerful learning through prediction, reflection, reasoning, and the revision of mental models (Chi, [18]; Chi et al., [20]; Roy & Chi, [59]). Therefore, appropriate self-explanation prompts have the potential to trigger and improve the learning engagement that enables learners to take part in deep learning processes. Previous research, including a meta-analysis (Bisra et al., [11]), provides positive evidence that self-explanation prompts enhance the depth of learning processes and improve learning performance (Moreno & Mayer, [52]; van der Meij et al., 2011; Lin et al., [41]; Park et al., [56]).

Researchers have pointed out that different types of self-explanation prompts, that is, differently presented prompting questions, may induce self-explanation to different degrees and thus affect learners' deep cognitive processing and deep learning to varying extents (Atkinson et al., [5]; Kent et al., 2007; Yeh et al., [73]). For instance, Nokes et al. ([53]) designed gap-filling prompts and mental-model revision prompts to support students' problem-solving in physics. Roy and Chi ([59]) also put forward several types of prompts, including open-ended, focused, scaffolded, resource-based, and menu-based prompts, to engage students in learning successfully from various kinds of instructional materials. Notably, self-explaining is an intrinsic cognitive construction activity (Roy & Chi, [59]), which means that deciding when to present the prompting questions, and observing in a timely manner how they affect learners' meaningful understanding and knowledge construction, is crucial.

Researchers generally refer to prompting questions given at the beginning of a related instruction as prediction prompts, and to questions that learners must explain immediately after completing the instructional task as reflection prompts (Hajian et al., [29]; Lin et al., [41]). In the current study, prediction prompts were presented to students before the instructional video content was delivered, which may help them activate their prior knowledge and ideas related to a specific concept and subsequently examine them in more detail (Hajian et al., [29]). Reflection prompts, in contrast, were presented after students had finished watching the instructional video, guiding them to summarize and reflect on what they had learned and even to work to resolve cognitive conflicts between old and new knowledge.

Previous research confirmed that prediction prompts and reflection prompts each positively affected students' cognitive processing (Hegarty et al., [31]; Moreno & Mayer, [52]). In addition, Lin et al. ([41]) investigated the impact of different types of self-explanation prompts (prediction vs. reflection vs. no prompts) and visual cueing on learning outcomes in a multimedia environment. They found that visual cueing significantly affected learning outcome scores, but self-explanation prompts had no significant effect. This result may stem from the fact that multimedia learning materials are complex, dynamic, and transient (Wang et al., [67]), so it is not easy to track and analyze how cues and self-explanation prompts affect the deep learning process. Much more research is needed to understand the effects of different types of cues and self-explanation prompts on cognitive processes and deep learning performance by analyzing multiple data sources, including learners' brain wave data and eye movement data.

Overview of the current study

This study investigates the effects of cues (visual vs. textual vs. combined textual-&-visual) and self-explanation prompts (prediction vs. reflection) used with instructional videos on promoting students' deep learning. Specifically, this study addressed the following research questions (RQs):

  • RQ1: Which type of cues (visual vs. textual vs. combined textual-&-visual) has more significant positive effects on deep learning performance?
  • RQ2: Which type of self-explanation prompts (prediction vs. reflection) has more significant positive effects on deep learning performance?
  • RQ3: Does any interaction effect exist for each deep learning performance measure between cues (visual vs. textual vs. combined textual-&-visual) and self-explanation prompts (prediction vs. reflection)?

This study manipulated two independent variables: cues (visual vs. textual vs. combined textual-&-visual) and self-explanation prompts (prediction vs. reflection). The different combinations of cues and self-explanation prompts were embedded into the learning materials (an instructional video about ozone) to attract students' attention, foster learning engagement, and stimulate deep learning activities. To examine how the combinations of cues and self-explanation prompts affected the deep learning process and learning performance, we used four indicators as dependent variables: intrinsic motivation, learning engagement, learning outcomes (retention and transfer scores), and cognitive load.

Method

Participants and design

N = 72 Chinese college students (50 females, 12 males; mean [M] age = 23 years, standard deviation [SD] = 2.2, range 18-25 years) took part in the study, which lasted about 40 min per participant. The participants were majoring in educational technology or software engineering and had normal hearing and normal or corrected-to-normal vision. They had basic computer skills but were not familiar with the instructional content. To ensure that participants took part seriously and that accurate experimental data were obtained, they received a small stipend after carefully completing the entire experiment.

This study used a 3 (cues: visual vs. textual vs. combined textual-&-visual) × 2 (self-explanation prompts: prediction vs. reflection) between-subjects design. The participants were randomly assigned in equal numbers (n = 12) to one of the six conditions (See Table 1).

Table 1 Six conditions and explanations

VC-PP: Add a black directional arrow pointing to the key information; deliver prediction questions related to the key information before learning with the video.
VC-RP: Add a black directional arrow pointing to the key information; deliver reflection questions related to the key information after finishing learning with the video.
TC-PP: Type explanatory text in white on a red background to enhance the visual effect; deliver prediction questions related to the key information before learning with the video.
TC-RP: Type explanatory text in white on a red background to enhance the visual effect; deliver reflection questions related to the key information after finishing learning with the video.
CTVC-PP: Add a black directional arrow pointing to the key information and type explanatory text in white on a red background to enhance the visual effect; deliver prediction questions related to the key information before learning with the video.
CTVC-RP: Add a black directional arrow pointing to the key information and type explanatory text in white on a red background to enhance the visual effect; deliver reflection questions related to the key information after finishing learning with the video.

Note: VC-PP = Visual Cues with Prediction Prompts; VC-RP = Visual Cues with Reflection Prompts; TC-PP = Textual Cues with Prediction Prompts; TC-RP = Textual Cues with Reflection Prompts; CTVC-PP = Combined Textual-&-Visual Cues with Prediction Prompts; CTVC-RP = Combined Textual-&-Visual Cues with Reflection Prompts

Learning materials

The original learning material was a short instructional video from FuseSchool (www.fuseschool.org), lasting 4 min and 10 s, which delivered an instructional unit about ozone in the atmosphere. Specifically, the instructional video covered the following topics in sequence: (a) the different characteristics of ozone in the stratosphere and troposphere, (b) the formation process of ozone in the stratosphere, (c) the uneven distribution of ozone in the atmosphere, and (d) the reason why most of the ozone in the troposphere becomes a pollutant. Because the participants were native Chinese speakers, the English text elements of the original video were translated into Chinese by two graduate students proficient in English, who also dubbed the video in Chinese. Then, the different combinations of cues and self-explanation prompts were embedded into six versions of the instructional animation (see Table 1). In each condition, cues appeared in the relevant animation area when the voiceover explained each of the eleven key instructional points. According to the topics in the teaching animation, the same type of self-explanation prompt was embedded at the beginning (prediction prompts) or the end (reflection prompts) of each topic, for a total of four prompts per condition.

Measures and instruments

Demographic and prior knowledge questionnaire

The demographic and prior knowledge questionnaire designed for the study collected participants' basic information (gender, age, and major) and assessed their prior knowledge of the instructional content (ozone), in order to eliminate the interference of participants with high prior knowledge on the experimental results. The prior knowledge questionnaire consisted of twenty basic questions about ozone selected from an environmental protection knowledge contest, including five fill-in-the-blank questions (e.g., the leading gases causing ozone layer destruction are ____), eight multiple-choice questions (e.g., which of the following dates is International Ozone Layer Protection Day? A. 6 December B. 6 November C. 6 October D. 6 September), and seven judgment (true/false) questions (e.g., ozone is a light blue gas with a particular smell). Each question was scored 0 for an incorrect answer or 5 for a correct answer, giving a maximum total score of 100. Participants scoring above 65 were judged to already know the instructional content well and were removed from the participant list. Cronbach's alpha for the questionnaire was 0.72, and its difficulty index was 0.40. There were no significant differences in prior knowledge among the six condition groups, F(5, 66) = 0.026, p > 0.05: VC-PP (M = 40.83, SD = 13.95), VC-RP (M = 40.00, SD = 10.00), TC-PP (M = 41.25, SD = 13.67), TC-RP (M = 40.42, SD = 12.87), CTVC-PP (M = 40.97, SD = 13.75), and CTVC-RP (M = 40.00, SD = 11.83).

Measure of intrinsic motivation

We adapted a five-item instrument for measuring intrinsic motivation from the measure developed by Ryan ([60]), covering five subscales: interest, competence, value, effort, and pressure. The instrument was an eight-point Likert scale with five items, each ranging from 1 ("Strongly disagree") to 8 ("Strongly agree") (See Table 2). Cronbach's alpha for the instrument in this study was 0.74, indicating acceptable reliability.

Table 2 Intrinsic motivation items

1. I think this activity was fun to do (Interest)
2. I think I was good at this activity (Competence)
3. I think doing this activity could be beneficial to me (Value)
4. I put a lot of effort into this (Effort)
5. I felt very nervous while doing this activity (Pressure)

Measure of learning engagement

Learning engagement refers to the "quantity and quality of mental resources directed at an object and the emotions and behaviors entailed" when individuals learn (Miller, [49], p. 31). Previous researchers have collected participants' physiological data using an EEG detector and/or an eye tracker to measure their learning engagement (Baceviciute et al., [6]; Dubovi, [24]; Wang et al., [65]). For example, Liu et al. ([42]) used a Tobii X120 eye tracker to collect students' gaze time on learning materials as a measure of their learning engagement. Baceviciute et al. ([6]) combined eye-tracking and EEG to investigate participants' cognitive engagement during learning in a virtual reality environment. To measure participants' learning engagement in the current study, we applied a portable EEG detector (NeuroSky MindWave Mobile) and a Tobii Pro X3-120 eye tracker to collect their brain wave data and eye movement data, respectively.

The portable EEG detector, which includes a sensor contacting the prefrontal lobe, a sensor clipped to the earlobe, and a data processing chip, can collect and record four raw brain wavebands in real time (alpha, beta, theta, and delta) with a sampling rate of 120 Hz and a signal accuracy of 0.25 μV. We used the software kit Mindxp to convert the collected brain wave data into attention and meditation values ranging from 0 to 100 via eSense, a proprietary algorithm for representing mental states. Attention values reflect the degree of participants' current mental concentration (cognitive engagement), while meditation values reflect the degree of mental relaxation (emotional engagement) (Liu et al., [42]; Wang et al., [65]; Yang et al., [72]).

A Tobii Pro X3-120 eye tracker with a sampling rate of 120 Hz was mounted beneath an HP display with a resolution of 1920 × 1080 to capture participants' eye movement data during the experiment. Tobii Studio 3.4.5 software was used to perform the calibration process and to analyze the eye movement data, such as fixation allocation, fixation duration, fixation count, saccades, and heatmaps. This study marked the regions of the cues within the instructional video in each condition as areas of interest (AOIs). We examined two eye movement indicators: (a) the total fixation count of AOIs, the sum of fixations within the AOIs for participants in an experimental condition, reflecting participants' familiarity with and degree of attention to the instructional materials; and (b) the mean fixation duration of AOIs, the average of all fixation durations within the AOIs, reflecting the degree of participants' cognitive processing of the learning materials. Previous studies have shown that these two indicators, which represent participants' selection of, attention to, and processing of information in learning materials, can accurately measure learning engagement (behavioral engagement) (Kaakinen, [36]; Wang et al., [67]; Yue et al., [74]).
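In practice, both indicators reduce to a count and a mean over the fixations that land in the cue AOIs. The following is a minimal sketch, assuming a hypothetical fixation export fixations.csv with columns participant, aoi (labeled "cue" for fixations inside a cue AOI), and duration_ms; these names are illustrative, not the actual Tobii Studio export schema.

```python
# Compute the two AOI engagement indicators from a fixation log.
# Assumed (hypothetical) columns: participant, aoi, duration_ms.
import pandas as pd

fix = pd.read_csv("fixations.csv")
cue_fix = fix[fix["aoi"] == "cue"]          # keep only fixations in cue AOIs

per_participant = cue_fix.groupby("participant")["duration_ms"].agg(
    total_fixation_count="count",           # how often the cue AOIs were fixated
    mean_fixation_duration="mean",          # average processing depth per fixation
)
print(per_participant)
```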

Measure of learning outcomes

In this study, we measured learning outcomes with a retention test and a transfer test, both developed by two experienced science instructors. The retention test mainly measured the extent of the participants' memory and mastery of the declarative knowledge explicitly delivered in the instructional animation. The retention test questions are closely related to the instructional video content emphasized by the cues and/or self-explanation prompts, reflecting the impact on cognitive processes and the depth of meaning construction (Ponce & Mayer, [58]; Wang et al., [67]). The test consisted of twenty questions, including five fill-in-the-blank questions (e.g., ozone in the stratosphere can absorb ____ from the sun and is the earth's protective umbrella), eight multiple-choice questions (e.g., which of the following is the reason for less ozone distribution at the stratospheric-tropospheric junction? A. no oxygen, B. six less oxygen, C. no ultraviolet rays, and D. six less ultraviolet rays), and seven judgment (true/false) questions (e.g., ozone in the stratosphere can absorb 80% of the sun's ultraviolet rays and is the earth's protective umbrella). Each question was scored 0 for an incorrect answer or 5 for a correct answer, giving a maximum total score of 100. The retention test's scoring procedures were consistent with those of the prior knowledge test, but the questions were different.

The transfer test measured the extent to which participants applied the declarative knowledge to solve problems that were not expressly explained in the instructional animation. The test consisted of five short-answer questions closely associated with the self-explanation prompts: four related directly to the self-explanation prompts, and the fifth required participants to consider the instructional content as a whole before answering what they had learned. Two science instructors with a deep understanding of ozone scored the transfer test according to the corresponding scoring standards. The scorers' consistency coefficient was above 0.88.

Measure of cognitive load

We adapted a five-item instrument for measuring cognitive load from the NASA-TLX developed by Hart and Staveland ([30]), covering five subscales: mental demand, temporal demand, effort, performance, and frustration level. The instrument was an eight-point Likert scale with five items ranging from 1 to 8 (See Table 3). Cronbach's alpha for the instrument in this study was 0.75, indicating acceptable reliability.

Table 3 Cognitive load items

1. How much mental activity was required to accomplish the task (e.g., thinking, deciding, remembering, calculating)? (Mental Demand; 1 = easy, 8 = harsh)
2. How leisurely or urgent did you feel about your learning speed in the task? (Temporal Demand; 1 = leisurely, 8 = urgent)
3. How much effort did you make to understand the contents of the video-based animation? (Effort; 1 = low effort, 8 = high effort)
4. How satisfied did you feel with your performance in the task? (Performance; 1 = not satisfied at all, 8 = very satisfied)
5. How insecure, discouraged, irritated, stressed, and annoyed did you feel during the learning task? (Frustration Level; 1 = not frustrated at all, 8 = very frustrated)

Procedure and data collection

The experiment was carried out in a learning science laboratory of a Chinese college. At the beginning of the study, participants were required to sign a participation consent form. After each participant sat in front of a computer in the laboratory, the researchers briefed them on the experimental process. The experimental procedure comprised five steps, as shown in Fig. 1.

Fig. 1 Experiment design. Abbreviations: VC-PP = Visual Cues with Prediction Prompts; VC-RP = Visual Cues with Reflection Prompts; TC-PP = Textual Cues with Prediction Prompts; TC-RP = Textual Cues with Reflection Prompts; CTVC-PP = Combined Textual-&-Visual Cues with Prediction Prompts; CTVC-RP = Combined Textual-&-Visual Cues with Reflection Prompts

In the first step, the participants completed the demographic questionnaire and took the prior knowledge test; they were then randomly assigned to one of the six experimental conditions. In the second step, the researchers helped participants put on the EEG headset, adjusted their sitting posture, and calibrated the eye tracker using the 9-point calibration method. To ensure effective capture of eye movement data, participants sat approximately 65 cm from the display and were asked to move their heads as little as possible. In the third step, before the main experimental task, participants completed a 5-min warm-up task unrelated to ozone to familiarize themselves with the learning setting. In the fourth step, participants completed the main experimental task with the instructional video within 10 min. During the main task, participants' mindwave data (attention and meditation) and eye movement data (total fixation count and mean fixation duration) were collected by the EEG headset and the eye tracker, respectively. In the fifth step, participants independently completed the retention test, the transfer test, the intrinsic motivation questionnaire, and the cognitive load questionnaire. After submitting all the questionnaires, participants were thanked for their participation. Each participant took about 40 min to complete the whole experiment.

Data analysis

We first checked the results for normality with the Shapiro-Wilk test and determined that the data sets were normally distributed (ps > 0.05). We then used the Statistical Package for the Social Sciences (SPSS) 24.0 to process the experimental data, applying two-way analysis of variance (ANOVA) to assess the potential effects of cues and self-explanation prompts on the indicators of deep learning: intrinsic motivation, learning engagement (total fixation count and mean fixation duration), learning outcomes (retention and transfer test scores), and cognitive load. In addition, after the participants finished the learning activities, their brainwave states of attention and meditation were coded as shown in Tables 4 and 5. Following Bakeman and Gottman ([8]) and Yang et al. ([72]), we used the sequential analysis method (SAM) to analyze the attention and meditation codes.
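As a concrete illustration of this pipeline, the sketch below runs the normality check, the homogeneity-of-variance check, and a 3 × 2 between-subjects ANOVA for one indicator. It assumes a hypothetical long-format file results.csv with columns cue (VC/TC/CTVC), prompt (PP/RP), and retention; the file and column names are illustrative, not from the original study, and SPSS would report the equivalent statistics.

```python
# Normality, Levene's test, and a 3 x 2 two-way ANOVA for one indicator.
# Assumed (hypothetical) data: results.csv with columns cue, prompt, retention.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("results.csv")

# Shapiro-Wilk normality check (the paper reports ps > .05).
print(stats.shapiro(df["retention"]))

# Levene's test for homogeneity of variance across the six cells.
cells = [g["retention"].values for _, g in df.groupby(["cue", "prompt"])]
print(stats.levene(*cells))

# Two-way between-subjects ANOVA: main effects of cue and prompt plus interaction.
model = ols("retention ~ C(cue) * C(prompt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```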

Table 4 Attention coding

A1 (attention value 1-19): very low level of attention
A2 (attention value 20-39): low level of attention
A3 (attention value 40-59): neutral level of attention
A4 (attention value 60-79): high level of attention
A5 (attention value 80-100): very high level of attention

Table 5 Meditation coding

M1 (meditation value 1-19): very low level of meditation
M2 (meditation value 20-39): low level of meditation
M3 (meditation value 40-59): neutral level of meditation
M4 (meditation value 60-79): high level of meditation
M5 (meditation value 80-100): very high level of meditation
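A minimal sketch of applying the coding in Tables 4 and 5 to the eSense value streams is shown below; the DataFrame contents and column names are illustrative assumptions, not data from the study.

```python
# Map 1-100 eSense attention/meditation values onto the A1-A5 / M1-M5 bins.
import pandas as pd

def code_level(value: float, prefix: str) -> str:
    """Return the code (e.g., 'A4') for an eSense value per Tables 4-5."""
    for upper, level in [(19, 1), (39, 2), (59, 3), (79, 4), (100, 5)]:
        if value <= upper:
            return f"{prefix}{level}"
    raise ValueError("eSense values are expected to lie in 1-100")

# Hypothetical per-second eSense readings for one participant.
eeg = pd.DataFrame({"attention": [15, 42, 85], "meditation": [55, 70, 90]})
eeg["attention_code"] = eeg["attention"].map(lambda v: code_level(v, "A"))
eeg["meditation_code"] = eeg["meditation"].map(lambda v: code_level(v, "M"))
print(eeg)
```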

Results

Intrinsic motivation

We used Levene's test for equality of variances to check the homogeneity of variance of the groups' intrinsic motivation scores. After Levene's test confirmed homoscedasticity, F(5, 66) = 2.020, p = 0.087, a two-way ANOVA was conducted to examine the potential effects of cues and self-explanation prompts on intrinsic motivation. Neither main effect was significant: cues, F(2, 66) = 0.739, p = 0.481, η2p = 0.022; self-explanation prompts, F(1, 66) = 0.657, p = 0.420, η2p = 0.010. The interaction effect between cues and self-explanation prompts on intrinsic motivation was also not significant, F(2, 66) = 0.979, p = 0.381, η2p = 0.029.

Learning engagement

Analysis of total fixation count and mean fixation duration measured by eye tracker

Levene's test on the total fixation count confirmed homoscedasticity, F(5, 66) = 1.369, p = 0.247. We then conducted two-way ANOVAs on the total fixation count of AOIs and the mean fixation duration of AOIs. Neither main effect on total fixation count was significant: cues, F(2, 66) = 0.274, p = 0.761, η2p = 0.008; self-explanation prompts, F(1, 66) = 0.001, p = 0.977, η2p = 0.001. However, the interaction effect between cues and self-explanation prompts on total fixation count was significant, F(2, 66) = 3.907, p = 0.025, η2p = 0.106. Follow-up simple-effects tests showed that participants in the VC-PP condition (M = 109.83, SD = 23.74) had a significantly higher total fixation count (ps < 0.05) than those in the other conditions: VC-RP (M = 83.58, SD = 27.89), TC-PP (M = 86.75, SD = 26.75), CTVC-PP (M = 90.00, SD = 20.04), TC-RP (M = 97.25, SD = 23.52), and CTVC-RP (M = 105.17, SD = 41.60) (See Fig. 2).

Fig. 2 Interaction between self-explanation prompts and cues on total fixation count and mean fixation duration. Abbreviations: VC = Visual Cues; TC = Textual Cues; CTVC = Combined Textual-&-Visual Cues; PP = Prediction Prompts; RP = Reflection Prompts

After Levene's test on the mean fixation duration confirmed homoscedasticity, F(5, 66) = 2.029, p = 0.086, we conducted a two-way ANOVA. The results showed significant main effects of both cues, F(2, 66) = 10.939, p < 0.001, η2p = 0.249, and self-explanation prompts, F(1, 66) = 11.256, p = 0.001, η2p = 0.146, on mean fixation duration. A follow-up Least Significant Difference (LSD) test found the following: (a) the mean fixation duration of participants in the VC condition (M = 0.31, SD = 0.13) was significantly longer than that of participants in the TC condition (M = 0.25, SD = 0.10) and the CTVC condition (M = 0.19, SD = 0.06); (b) the mean fixation duration of participants in the TC condition was significantly longer than that of participants in the CTVC condition (See Fig. 3); and (c) the mean fixation duration of participants in the RP conditions (M = 0.29, SD = 0.13) was significantly longer than that of participants in the PP conditions (M = 0.21, SD = 0.07).

Fig. 3 Mean of the mean fixation duration for the three cue conditions. Abbreviations: VC = Visual Cues; TC = Textual Cues; CTVC = Combined Textual-&-Visual Cues
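The LSD follow-up reported above can be approximated with unadjusted pairwise comparisons; note that Fisher's LSD proper pools the ANOVA error variance across all groups, so the plain two-sample t-tests sketched below are only a close approximation. The column names (cue, mean_fix_dur) are illustrative assumptions.

```python
# Approximate LSD post-hoc comparisons with unadjusted pairwise t-tests.
from itertools import combinations

import pandas as pd
from scipy import stats

df = pd.read_csv("results.csv")   # hypothetical long-format data
for a, b in combinations(["VC", "TC", "CTVC"], 2):
    t, p = stats.ttest_ind(df.loc[df["cue"] == a, "mean_fix_dur"],
                           df.loc[df["cue"] == b, "mean_fix_dur"])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.3f}")
```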

Moreover, the interaction effect between cues and self-explanation prompts on mean fixation duration was significant, F(2, 66) = 3.907, p = 0.025, η2p = 0.106. Follow-up simple-effects tests showed that participants in the VC-RP condition (M = 0.39, SD = 0.12) had a significantly longer mean fixation duration than those in the other conditions (See Fig. 2).

Analysis of attention and meditation measured by EEG

This study used EEG to collect and record participants' mindwave data (attention and meditation) during the experimental learning activities. According to Yang et al. ([72]) and Hou et al. ([32]), the sequential analysis method can analyze the message codes (participants' attention and meditation) and identify significant code sequences via the Z-score binomial test. In a code transition diagram, each node represents a code, lines indicate significance levels (a transition is significant when Z > 1.96; Bakeman & Gottman, [8]), and arrows point in the direction of the transition. To visualize the significant code transitions, we drew transition diagrams for attention and meditation, respectively.
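To make the transition statistics concrete, here is a minimal sketch of a lag-1 sequential analysis that tallies transitions between consecutive codes and computes adjusted residuals (Z-scores) in the spirit of Bakeman and Gottman ([8]); the sample code sequence is invented for illustration.

```python
# Lag-1 sequential analysis: transition counts and adjusted residuals (Z).
import numpy as np

def adjusted_residuals(codes):
    states = sorted(set(codes))
    idx = {s: i for i, s in enumerate(states)}
    freq = np.zeros((len(states), len(states)))
    for a, b in zip(codes, codes[1:]):        # count lag-1 transitions
        freq[idx[a], idx[b]] += 1
    n = freq.sum()
    row = freq.sum(axis=1, keepdims=True)     # transitions leaving each state
    col = freq.sum(axis=0, keepdims=True)     # transitions entering each state
    expected = row @ col / n
    variance = expected * (1 - row / n) * (1 - col / n)
    return states, (freq - expected) / np.sqrt(variance)

states, z = adjusted_residuals(["A1", "A1", "A2", "A1", "A1", "A3", "A1"])
print(states)
print(np.round(z, 2))   # transitions with z > 1.96 are significant
```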

As shown in Fig. 4, participants in the TC-PP and TC-RP conditions continuously stayed in the "high level of attention" (A4) and "very high level of attention" (A5) states, with significant Z = 3.61 and 2.50, respectively. Notably, participants in the TC-RP condition showed no "very low level of attention" (A1) states, while participants in the other four conditions all showed significantly stable states of "very low level of attention" (A1) and "low level of attention" (A2). In particular, participants in the VC-PP and CTVC-RP conditions continuously stayed in the A1 state, with significant Z = 3.61 and 3.05, respectively; that is, mind wandering appeared continuously during their learning activities. In sum, participants in the TC-RP condition readily reached high-level attention and could maintain it.

Fig. 4 The attention transition diagrams of the six conditions. Abbreviations: VC-PP = Visual Cues with Prediction Prompts; VC-RP = Visual Cues with Reflection Prompts; TC-PP = Textual Cues with Prediction Prompts; TC-RP = Textual Cues with Reflection Prompts; CTVC-PP = Combined Textual-&-Visual Cues with Prediction Prompts; CTVC-RP = Combined Textual-&-Visual Cues with Reflection Prompts

Brainwave meditation can represent participants' level of mental calmness or relaxation: when the meditation value increases, the participant is in a more relaxed state, and when it decreases, the participant is in a more stressed state. A meditation value that is too high (excessive relaxation) or too low (high stress) is not necessarily conducive to the learning process; meditation states of moderate tension, moderate relaxation, or even neutrality are more beneficial for engaging participants in learning activities.

As shown in Fig. 5, the meditation of participants in the TC-PP and TC-RP conditions was concentrated in transitions between M3 (neutral level of meditation) and M4 (high level of meditation), which shows that these participants maintained moderately relaxed states. Notably, participants in the TC-RP condition showed no "very high level of meditation" (M5) states, indicating that they had the most stable meditation of all participants. Participants in the VC-PP, VC-RP, and CTVC-RP conditions showed concentrated transitions between M4 (high level of meditation) and M5 (very high level of meditation). Notably, participants in the CTVC-PP condition continuously stayed in the M5 state, with significant Z = 2.66; that is, mind wandering might have appeared constantly at some stage of their learning activities. In sum, participants in the TC-PP and TC-RP conditions readily reached and kept moderate meditation states, either neutral or a high level of relaxation.

Fig. 5 The meditation transition diagrams of the six conditions. Abbreviations: VC-PP = Visual Cues with Prediction Prompts; VC-RP = Visual Cues with Reflection Prompts; TC-PP = Textual Cues with Prediction Prompts; TC-RP = Textual Cues with Reflection Prompts; CTVC-PP = Combined Textual-&-Visual Cues with Prediction Prompts; CTVC-RP = Combined Textual-&-Visual Cues with Reflection Prompts

Learning outcomes

Levene's tests on the retention and transfer scores confirmed homoscedasticity, F(5, 66) = 0.991, p = 0.430, and F(5, 66) = 0.532, p = 0.752, respectively. Two-way ANOVAs were then conducted to evaluate the effects of cues and self-explanation prompts on the retention and transfer test scores.

The results indicated a significant main effect of cues on the retention test scores, F(2, 66) = 5.012, p = 0.009, η2p = 0.132. A follow-up LSD test found that participants in the TC condition (M = 59.58, SD = 10.73) scored significantly higher than those in the VC condition (M = 49.79, SD = 8.91) and the CTVC condition (M = 51.88, SD = 13.42), respectively (see Fig. 6). However, the main effect of self-explanation prompts, F(1, 66) = 3.518, p = 0.30, η2p = 0.016, and the interaction between cues and prompts, F(2, 66) = 0.248, p = 0.781, η2p = 0.007, were not significant.

Fig. 6 Means of the retention test and the transfer test for the three cue conditions. Abbreviations: VC = Visual Cues; TC = Textual Cues; CTVC = Combined Textual-&-Visual Cues

The two-way ANOVA also indicated a significant main effect of cues on the transfer test scores, F(2, 66) = 3.527, p = 0.035, η2p = 0.097. A follow-up LSD test found that participants in the TC condition (M = 46.27, SD = 13.61) scored significantly higher (ps < 0.05) than those in the CTVC condition (M = 35.69, SD = 14.74). In addition, the interaction effect between cues and self-explanation prompts on the transfer test scores was significant, F(2, 66) = 4.960, p = 0.010, η2p = 0.131 (See Fig. 7). Follow-up simple-effects tests showed that participants in the VC-PP condition (M = 49.96, SD = 15.06) and the TC-RP condition (M = 50.63, SD = 15.61) scored significantly higher (ps < 0.05) than those in the VC-RP (M = 34.50, SD = 11.94), CTVC-RP (M = 37.29, SD = 14.13), TC-PP (M = 41.92, SD = 10.12), and CTVC-PP (M = 34.08, SD = 15.78) conditions, respectively.

Fig. 7 Interaction between the two prompts and the cues on the transfer test. Abbreviations: VC = Visual Cues; TC = Textual Cues; CTVC = Combined Textual-&-Visual Cues; RP = Reflection Prompts; PP = Prediction Prompts

However, the main effect of self-explanation prompts on the transfer test scores was not significant, F(1, 66) = 0.129, p = 0.720, η2p = 0.002.

Cognitive load

Levene's test on the cognitive load scores confirmed homoscedasticity, F(5, 66) = 1.521, p = 0.195. We then conducted a two-way ANOVA to examine the potential effects of cues and self-explanation prompts on cognitive load. There was a significant main effect of self-explanation prompts, F(1, 66) = 7.595, p = 0.008, η2p = 0.103: participants assigned to the PP conditions (M = 15.42, SD = 4.67) scored significantly lower than those assigned to the RP conditions (M = 18.86, SD = 6.01). However, the main effect of cues, F(2, 66) = 2.351, p = 0.103, η2p = 0.066, and the interaction between cues and prompts, F(2, 66) = 0.730, p = 0.486, η2p = 0.022, were not significant.

Discussion and conclusion

The purpose of this study was to measure the effects of different types of cues (visual vs. textual vs. combined textual-&-visual) and self-explanation prompts (prediction vs. reflection) in instructional videos on participants' deep learning performance. The study adopted multiple data metrics to measure and understand participants' deep learning performance through four indicators: intrinsic motivation, learning engagement, learning outcomes, and cognitive load. To measure learning engagement (attention, meditation, total fixation count, and mean fixation duration) during the learning process, we used a portable EEG detector and an eye tracker to collect brain wave data and eye movement data, respectively. The results revealed that the textual cues significantly facilitated learning outcomes and learning engagement (attention), while the reflection prompts significantly affected learning engagement (mean fixation duration) and cognitive load. Notably, the combination of textual cues and reflection prompts (TC-RP) and the combination of visual cues and prediction prompts (VC-PP) each allowed the participants to focus on and engage in the video learning process more deeply, resulting in significantly higher learning outcomes than those of their peers in other conditions.

Which type of cues (visual vs. textual vs. combined textual-&-visual) has more significant positive effects on deep learning performance?

The current study results revealed that the textual cues significantly facilitated learning outcomes (retention and transfer test scores) in the instructional video learning activities. This beneficial effect of textual cues on learning outcomes is consistent with the results of several previous studies (Boucheix & Lowe, [12]; Canham & Hegarty, [14]; Miller et al., [50]; Stark et al., [62]; Wang et al., [68]). For instance, a multimedia courseware design study also showed that the effect of textual cues on learning outcomes was significant (Wang et al., [68]). However, some previous studies found that visual and combined textual-&-visual cues have more positive effects on students' learning outcomes than textual cues, which is inconsistent with our findings (Boucheix & Lowe, [12]; Wang et al., [67]). A possible reason is that the added cueing text, combined with the animations, pictures, and voice-overs related to the learning content in the instructional videos, fosters the cognitive processes of selecting, organizing, and integrating learning information, which benefits deep learning. The Cognitive Theory of Multimedia Learning (Mayer, [46], [47]) supports our findings: text entering working memory through the visual channel combines with the visual representations (e.g., pictures, animations, video) and the sound (e.g., voice-over) entering through the auditory channel to form a "phrase," which helps learners process the learning materials deeply and significantly improves the learning effect. In addition, the EEG data in the current study showed that participants in the textual cues conditions maintained a higher level of attention (A4 and A5), indicating that they engaged in continuous deep cognitive processing, which was beneficial for better learning outcomes.

Our results did not show that the different cues significantly affected intrinsic motivation or cognitive load, consistent with previous experimental findings that cueing had no obvious direct effect on either (Arslan, [3]; Lin et al., [41]). Regarding the non-significant differences between cues on intrinsic motivation, a possible reason is that the volunteer participants, who came from different majors, might not have been interested in or motivated toward learning the content (Arslan, [3]). The fact that neither the cueing effect nor the prompting-by-cueing interaction on intrinsic motivation was significant also provides support for this explanation. In terms of cognitive load, the findings are in line with previous studies; a possible reason is that short learning materials, especially a 4-min instructional video, are unlikely to generate a heavy cognitive load (De Koning et al., [23]; Kriz & Hegarty, [38]; Lin et al., 2011; Mautone & Mayer, [45]; Yang, [71]). Another possible reason is that participants easily ignored the static embedded cues that appeared several times in the instructional videos, so the cues could not increase their cognitive load.

Which type of self-explanation prompts (prediction vs. reflection) has more significant positive effects on deep learning performance?

The current study results revealed that the reflection prompts significantly affected learning engagement (mean fixation duration) and cognitive load, while having no significant impact on the other indicators of deep learning performance. The mean fixation duration is a crucial positive indicator for measuring engagement in selecting information during learning processes (Wang et al., [67]). The positive effect of reflection prompts on learning engagement in the current study is consistent with the findings of some empirical studies (Chen, [15]; Hung et al., [33]). The feedback principle of multimedia learning, which holds that providing feedback guides learners toward a deeper understanding of learning materials, also supports the benefits of reflection prompts for learning engagement. Moreover, according to Hung et al. ([33]), reflection prompts are beneficial in directing learners to pay deeper attention to understanding the material and in promoting deep cognitive processes. In addition, compared with participants given prediction prompts, those given reflection prompts showed significantly higher learning engagement in cognitive processing, which readily results in higher cognitive load. The Cognitive Theory of Multimedia Learning also supports this finding: when learners engage deeply in selecting and integrating information during multimedia learning, their germane cognitive load increases (Mayer, [47]).

The results showed no significant difference between the two self-explanation prompts in their benefits for learning outcomes, consistent with the findings of some empirical studies (De Koning et al., [22]; Gerjets et al., [27]; Lin et al., [41]; Moreno & Mayer, [51]). Lin et al. ([41]) offered a possible explanation for such findings: the participants might not actually self-explain during the learning activities. In the current study, however, the different types of self-explanation prompts differed significantly in learning engagement (mean fixation duration), indicating that the participants had indeed used the reflection prompts during their learning activities. Therefore, a possible explanation for the non-significant difference in learning outcomes is that the negative effect of the higher cognitive load offset the positive effect of learning engagement on learning outcomes.

Does any interaction effect exist for each deep learning performance measure between cues (visual vs. textual vs. combined textual-&-visual) and self-explanation prompts (prediction vs. reflection)?

The current study found significant interaction effects between cues and self-explanation prompts on the transfer test scores and on learning engagement. Specifically, the combination of textual cues and reflection prompts (TC-RP) and the combination of visual cues and prediction prompts (VC-PP) significantly benefited participants' transfer test scores and learning engagement (total fixation count and attention, respectively). In the current study, the prediction questions delivered before the video learning might have activated participants' relevant prior knowledge and guided them to purposefully select and analyze the information related to the predicted question or to that prior knowledge (Lin et al., [41]). Moreover, while participants selected and analyzed the instructional video information, the visual cues could more directly and vividly direct their cognitive attention to the learning information related to the prompting question, helping them establish a deeper connection between prior and new knowledge and facilitating knowledge transfer. Participants in the VC-PP condition had a significantly higher total fixation count than those in the other conditions, which also suggests that they were engaged in selecting the information related to the prediction prompts under the guidance of the visual cues.

On the other hand, the participants in the TC-RP condition were less purposeful during the instructional video learning than those given prediction prompts; therefore, they had to attend to and fully understand the video information during their learning activities. Notably, the textual cues (e.g., colored text or varied intonation) could draw the participants' attention to the key locations of the essential learning information, helping them analyze and understand that information deeply (Wang et al., [67]). This could explain why the participants in the TC-RP condition continuously stayed in the "very high level of attention" (A5) state and showed the most stable meditation, maintaining moderately relaxed states. Moreover, Hung et al. ([33]) and Chen ([15]) pointed out that "reflection prompts can guide students to think deeply, so as to improve their learning performances via connecting their prior concepts with new ideas." That is, in the current study, the reflection prompts gave the participants who used the textual cues crucial opportunities to reorganize and integrate information, helping them deeply understand the relationship between prior and new knowledge, which explains their significantly better performance on the transfer test.

Limitations and future research

This study examined the effects of different types of cues and self-explanation prompts on students' deep learning performance in instructional videos. Although we collected and analyzed multiple kinds of data on deep learning performance, the current study has several limitations. First, the participants were from a single discipline at a normal university in China, and the number of female participants (50) was much larger than that of male participants (12). Researchers who plan to replicate the experiment should consider recruiting participants from other disciplines and/or cultural backgrounds and balancing the numbers of female and male participants. Second, since wearing an EEG headset is unsuitable for long study periods and requires participants to keep their heads as still as possible, the instructional videos used in the study were only about 4 min long, which might have affected the experimental results. We suggest that future studies properly extend the duration of the instructional videos and use more advanced EEG headsets. Third, our study focused on four indicators (intrinsic motivation, learning engagement, learning outcomes, and cognitive load), which may exclude other indicators equally important for studying deep learning. Future researchers should use more accurate multimodal data analysis methods to explore the occurrence of deep learning and evaluate it with multi-dimensional indicators, such as participants' emotions, behavior, presence, and social interactions.

The results of this study could provide some implications for instructors and learning materials developers. On the one hand, when designing short instructional videos for college students, we suggest using textual cues to attract students' attention, engage them deeply in cognitive processing, and facilitate their learning outcomes. On the other hand, when combining cues and self-explanation prompts in a video learning setting, instructors or developers could apply the combination of textual cues and reflection prompts (TC-RP) or the combination of visual cues and prediction prompts (VC-PP) to foster students' video learning engagement and outcomes. In addition, we recommend applying proper cognitive tools or strategies in video learning settings to motivate students to engage in learning more deeply and to decrease their cognitive load.

Acknowledgements

This study was supported by the Jiangsu Social Science Foundation Youth Project (20JYC002) and the National Natural Science Foundation of China (62077030).

Data Availability

The authors declare that data associated with this paper will be made available upon reasonable request.

Declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Informed consent

All participants were informed of the objectives of the experiment. All data have been anonymised to guarantee the privacy of the participants.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1 Aguiar-Castillo L, Clavijo-Rodriguez A, Hernández-López L, De Saa-Pérez P, Pérez-Jiménez R. Gamification and deep learning approaches in higher education. Journal of Hospitality, Leisure, Sport & Tourism Education. 2021; 29: 100290. 10.1016/j.jhlste.2020.100290
2 Akyol Z, Garrison DR. Assessing metacognition in an online community of inquiry. The Internet and Higher Education. 2011; 14; 3: 183-190. 10.1016/j.iheduc.2011.01.005
3 Arslan I. Examining the effects of cueing and prior knowledge on learning, mental effort, and study time in a complex animation (Doctoral dissertation). 2013; Texas Tech University.
4 Arslan-Ari I, Crooks SM, Ari F. How much cueing is needed in instructional animations? The role of prior knowledge. Journal of Science Education and Technology. 2020; 29; 5: 666-676. 10.1007/s10956-020-09845-5
5 Atkinson RK, Renkl A, Merrill MM. Transitioning from studying examples to solving problems: Effects of self-explanation prompts and fading worked-out steps. Journal of Educational Psychology. 2003; 95; 4: 774-783. 10.1037/0022-0663.95.4.774
6 Baceviciute S, Terkildsen T, Makransky G. Remediating learning from non-immersive to immersive media: Using EEG to investigate the effects of environmental embeddedness on reading in Virtual Reality. Computers & Education. 2021; 164; 4: 104122. 10.1016/j.compedu.2020.104122
7 Baeten M, Kyndt E, Struyven K, Dochy F. Using student-centred learning environments to stimulate deep approaches to learning: Factors encouraging or discouraging their effectiveness. Educational Research Review. 2010; 5; 3: 243-260. 10.1016/j.edurev.2010.06.001
8 Bakeman R, Gottman JM. Observing interaction: An introduction to sequential analysis. 1997; Cambridge University Press. 10.1017/CBO9780511527685
9 Bayraktar DM, Bayram S. Effects of cueing and signalling on change blindness in multimedia learning environment. World Journal on Educational Technology: Current Issues. 2019; 11; 1: 128-139.
10 Biggs J. What do inventories of students' learning processes really measure? A theoretical review and clarification. British Journal of Educational Psychology. 1993; 63; 1: 3-19. 10.1111/j.2044-8279.1993.tb01038.x
11 Bisra K, Liu Q, Nesbit JC, Salimi F, Winne PH. Inducing self-explanation: A meta-analysis. Educational Psychology Review. 2018; 30; 3: 1-23. 10.1007/s10648-018-9434-x
12 Boucheix JM, Lowe RK. An eye tracking comparison of external pointing cues and internal continuous cues in learning with complex animations. Learning & Instruction. 2010; 20; 2: 123-135. 10.1016/j.learninstruc.2009.02.015
13 Brasel SA, Gips J. Media multitasking: How visual cues affect switching behavior. Computers in Human Behavior. 2017; 77; 12: 258-265. 10.1016/j.chb.2017.08.042
14 Canham M, Hegarty M. Effects of knowledge and display design on comprehension of complex graphics. Learning and Instruction. 2010; 20; 2: 155-166. 10.1016/j.learninstruc.2009.02.014
15 Chen CH. Impacts of augmented reality and a digital game on students' science learning with reflection prompts in multimedia learning. Educational Technology Research and Development. 2020; 68; 6: 3057-3076. 10.1007/s11423-020-09834-w
16 Chen IS. Computer self-efficacy, learning performance, and the mediating role of learning engagement. Computers in Human Behavior. 2017; 72; 7: 362-370. 10.1016/j.chb.2017.02.059
17 Chen W, Allen C, Jonassen D. Deeper learning in collaborative concept mapping: A mixed methods study of conflict resolution. Computers in Human Behavior. 2018; 87; 10: 424-435. 10.1016/j.chb.2018.01.007
18 Chi MTH. Self-explaining expository texts: The dual process of generating inferences and repairing mental models. In Glaser R (Ed.), Advances in instructional psychology. 2000; Lawrence Erlbaum Associates: 161-238.
19 Chi MT, Bassok M, Lewis MW, Reimann P, Glaser R. Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science. 1989; 13; 2: 145-182. 10.1016/0364-0213(89)90002-5
20 Chi MT, De Leeuw N, Chiu MH, LaVancher C. Eliciting self-explanations improves understanding. Cognitive Science. 1994; 18; 3: 439-477. 10.1016/0364-0213(94)90016-7
21 De Koning B, Tabbers H, Rikers R, Paas F. Towards a framework for attention cueing in instructional animations: Guidelines for research and design. Educational Psychology Review. 2009; 21; 2: 113-140. 10.1007/s10648-009-9098-7
22 De Koning B, Tabbers HK, Rikers RMJP, Paas F. Learning by generating vs. receiving instructional explanations: Two approaches to enhance attention cueing animations. Computers & Education. 2010; 55; 2: 681-691. 10.1016/j.compedu.2010.02.027
23 De Koning B, Tabbers HK, Rikers RMJP, Paas F. Attention cueing in an animation: The role of presentation speed. Computers in Human Behavior. 2011; 27; 1: 41-45. 10.1016/j.chb.2010.05.010
24 Dubovi I. Cognitive and emotional engagement while learning with VR: The perspective of multimodal methodology. Computers & Education. 2022; 183: 104495. 10.1016/j.compedu.2022.104495
25 Filius RM, Kleijn R, Uijl SG, Prins FJ, Van R, Grobbee DE. Strengthening dialogic peer feedback aiming for deep learning in SPOCs. Computers & Education. 2018; 125; 10: 86-100. 10.1016/j.compedu.2018.06.004
26 Fonseca BA, Chi MT. The self-explanation effect: A constructive learning activity. In Mayer R, Alexander P (Eds.), Handbook of research on learning and instruction. 2011; Routledge: 270-321.
27 Gerjets P, Scheiter K, Catrambone R. Can learning from molar and modular worked examples be enhanced by providing instructional explanations and prompting self-explanations? Learning & Instruction. 2006; 16; 2: 104-121. 10.1016/j.learninstruc.2006.02.007
28 Grover S, Pea R, Cooper S. Designing for deeper learning in a blended computer science course for middle school students. Computer Science Education. 2015; 25; 2: 199-237. 10.1080/08993408.2015.1033142
29 Hajian S, Jain M, Liu AL, Obaid T, Fukuda M, Winne PH, Nesbit JC. Enhancing scientific discovery learning by just-in-time prompts in a simulation-assisted inquiry environment. European Journal of Educational Research. 2021; 10; 1: 989-1007. 10.12973/eu-jer.10.2.989
30 Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology. 1988; North-Holland: 139-183.
31 Hegarty M, Kriz S, Cate C. The roles of mental animations and external animations in understanding mechanical systems. Cognition and Instruction. 2003; 21; 4: 209-249. 10.1207/s1532690xci2104_1
32 Hou HT, Sung YT, Chang KE. Exploring the behavioral patterns of an online knowledge-sharing discussion activity among teachers with problem-solving strategy. Teaching and Teacher Education. 2009; 25; 1: 101-108. 10.1016/j.tate.2008.07.006
33 Hung IC, Yang XJ, Fang WC, Hwang GJ, Chen NS. A context-aware video prompt approach to improving students' in-field reflection levels. Computers & Education. 2014; 70: 80-91. 10.1016/j.compedu.2013.08.007
34 Johnson AM, Ozogul G, Reisslein M. Supporting multimedia learning with visual signaling and animated pedagogical agent: Moderating effects of prior knowledge. Journal of Computer Assisted Learning. 2015; 31; 2: 97-115. 10.1111/jcal.12078
35 Richter J, Scheiter K, Eitel A. Signaling text-picture relations in multimedia learning: A comprehensive meta-analysis. Educational Research Review. 2016; 17; 2: 19-36.
36 Kaakinen JK. What can eye movements tell us about visual perception processes in classroom contexts? Commentary on a special issue. Educational Psychology Review. 2021; 33; 1: 169-179. 10.1007/s10648-020-09573-7
37 Koszalka TA, Pavlov Y, Wu Y. The informed use of pre-work activities in collaborative asynchronous online discussions: The exploration of idea exchange, content focus, and deep learning. Computers & Education. 2021; 161: 104067. 10.1016/j.compedu.2020.104067
38 Kriz S, Hegarty M. Top-down and bottom-up influences on learning from animations. International Journal of Human-Computer Studies. 2007; 65; 11: 911-930. 10.1016/j.ijhcs.2007.06.005
39 Lei H, Cui Y, Zhou W. Relationships between student engagement and academic achievement: A meta-analysis. Social Behavior and Personality: An International Journal. 2018; 46; 3: 517-528. 10.2224/sbp.7054
40 Lin L, Atkinson RK. Using animations and visual cueing to support learning of scientific concepts and processes. Computers & Education. 2011; 56; 3: 650-658. 10.1016/j.compedu.2010.10.007
41 Lin L, Atkinson RK, Savenye WC, Nelson BC. Effects of visual cues and self-explanation prompts: Empirical evidence in a multimedia environment. Interactive Learning Environments. 2016; 24; 4: 799-813. 10.1080/10494820.2014.924531
42 Liu Z, Zhang Y, Zhou P. Research on the influence of self-efficacy on learning outcomes in desktop virtual reality environment: The mediating based on flow experience. Journal of Distance Education (in Chinese). 2022; 4: 55-64. 10.15881/j.cnki.cn33-1304/g4.2022.04.005
43 Lu K, Pang F, Shadiev R. Understanding the mediating effect of learning approach between learning factors and higher order thinking skills in collaborative inquiry-based learning. Educational Technology Research and Development. 2021; 69; 5: 2475-2492. 10.1007/s11423-021-10025-4
44 Marton F, Säljö R. On qualitative differences in learning: I—Outcome and process. British Journal of Educational Psychology. 1976; 46: 4-11. 10.1111/j.2044-8279.1976.tb02980.x
45 Mautone PD, Mayer RE. Signaling as a cognitive guide in multimedia learning. Journal of Educational Psychology. 2001; 93; 2: 377-389. 10.1037/0022-0663.93.2.377
46 Mayer RE. Multimedia learning (2nd ed.). 2009; Cambridge University Press. 10.1017/CBO9780511811678
47 Mayer RE. Cognitive theory of multimedia learning. In The Cambridge handbook of multimedia learning (2nd ed.). 2014; Cambridge University Press: 43-71. 10.1017/CBO9781139547369.005
48 Merrill MD. Instructional transaction theory: An instructional design model based on knowledge objects. In Instructional design: International perspectives (Vol. I: Theory, research, and models; Vol. II: Solving instructional design problems). 2012: 381.
49 Miller BW. Using reading times and eye movements to measure cognitive engagement. Educational Psychologist. 2015; 50; 1: 31-42. 10.1080/00461520.2015.1004068
50 Miller RE, Strickland C, Fogerty D. Multimodal recognition of interrupted speech: Benefit from text and visual speech cues. The Journal of the Acoustical Society of America. 2018; 144; 3: 1800. 10.1121/1.5067942
51 Moreno R, Mayer R. Interactive multimodal learning environments. Educational Psychology Review. 2007; 19; 3: 309-326. 10.1007/s10648-007-9047-2
52 Moreno R, Mayer R. Techniques that increase generative processing in multimedia learning: Open questions for cognitive load research. In Plass J, Moreno R, Brünken R (Eds.), Cognitive load theory. 2010; Cambridge University Press: 153-177. 10.1017/CBO9780511844744.010
53 Nokes TJ, Hausmann RG, VanLehn K, Gershman S. Testing the instructional fit hypothesis: The case of self-explanation prompts. Instructional Science. 2011; 39; 5: 645-666. 10.1007/s11251-010-9151-4
54 Offir B, Lev Y, Bezalel R. Surface and deep learning processes in distance education: Synchronous versus asynchronous systems. Computers & Education. 2008; 51; 3: 1172-1183. 10.1016/j.compedu.2007.10.009
55 Ozcelik E, Karakus T, Kursun E, Cagiltay K. An eye-tracking study of how color coding affects multimedia learning. Computers & Education. 2009; 53; 2: 445-453. 10.1016/j.compedu.2009.03.002
56 Park J, Park C, Jung H, Kim D. Promoting case indexing in case library learning: Effects of indexing prompts on self-explanation and problem solving. Journal of Computer Assisted Learning. 2020; 36; 5: 656-671. 10.1111/jcal.12435
57 Plass JL, Heidig S, Hayward EO, Homer BD, Um E. Emotional design in multimedia learning: Effects of shape and color on affect and learning. Learning and Instruction. 2014; 29: 128-140. 10.1016/j.learninstruc.2013.02.006
58 Ponce HR, Mayer RE. An eye movement analysis of highlighting and graphic organizer study aids for learning from expository text. Computers in Human Behavior. 2014; 41: 21-32. 10.1016/j.chb.2014.09.010
59 Roy M, Chi MTH. The self-explanation principle in multimedia learning. In Mayer RE (Ed.), The Cambridge handbook of multimedia learning. 2005; Cambridge University Press: 271-286. 10.1017/CBO9780511816819.018
60 Ryan RM. Control and information in the intrapersonal sphere: An extension of cognitive evaluation theory. Journal of Personality and Social Psychology. 1982; 43; 3: 450-461. 10.1037/0022-3514.43.3.450
61 Schneider S, Beege M, Nebel S, Rey GD. A meta-analysis of how signaling affects learning with media. Educational Research Review. 2018; 23: 1-24. 10.1016/j.edurev.2017.11.001
62 Stark L, Brünken R, Park B. Emotional text design in multimedia learning: A mixed-methods study using eye tracking. Computers & Education. 2018; 120; 5: 185-196. 10.1016/j.compedu.2018.02.003
63 van der Meij J, de Jong T. The effects of directive self-explanation prompts to support active processing of multiple representations in a simulation-based learning environment. Journal of Computer Assisted Learning. 2011; 27; 5: 411-423. 10.1111/j.1365-2729.2011.00411.x
64 Vos N, Van Der Meijden H, Denessen E. Effects of constructing versus playing an educational game on student motivation and deep learning strategy use. Computers & Education. 2011; 56; 1: 127-137. 10.1016/j.compedu.2010.08.013
65 Wang CR, Xu PP, Hu Y. Impact of desktop VR learning environment on learning engagement and performance: Evidence based on multimodal data. Open Education Research (in Chinese). 2021; 3: 112-120. 10.13966/j.cnki.kfjyyj.2021.03.012
66 Wang M, Derry S, Ge X. Guest editorial: Fostering deep learning in problem-solving contexts with the support of technology. Educational Technology & Society. 2017; 20; 4: 162-165.
67 Wang X, Lin L, Han M, Spector JM. Impacts of cues on learning: Using eye-tracking technologies to examine the functions and designs of added cues in short instructional videos. Computers in Human Behavior. 2020; 107: 106279. 10.1016/j.chb.2020.106279
68 Wang X, Wang ZJ, Fu TT, Li XN. The eye movement study on the design of text clues in multimedia courseware. China Educational Technology. 2015; 5: 99-104.
69 Wang Z, Adesope O. Do focused self-explanation prompts overcome seductive details? A multimedia study. Journal of Educational Technology & Society. 2017; 20; 4: 162-165.
70 Xie H, Mayer RE, Wang F, Zhou Z. Coordinating visual and auditory cueing in multimedia learning. Journal of Educational Psychology. 2019; 111; 2: 235. 10.1037/edu0000285
71 Yang HY. The effects of attention cueing on visualizers' multimedia learning. Journal of Educational Technology & Society. 2016; 19; 1: 249-262.
72 Yang XZ, Lin L, Cheng PY, Xue Y, Ren Y, Huang YM. Examining creativity through a virtual reality support system. Educational Technology Research and Development. 2018; 66; 5: 1231-1254. 10.1007/s11423-018-9604-z
73 Yeh YF, Chen MC, Hung PH, Hwang GJ. Optimal self-explanation prompt design in dynamic multi-representational learning environments. Computers & Education. 2010; 54; 4: 1089-1100. 10.1016/j.compedu.2009.10.013
74 Yue J, Tian F, Chao KM, Shah N, Li L, Chen Y, Zheng Q. Recognizing multidimensional engagement of E-learners based on multi-channel data in E-learning environment. IEEE Access. 2019; 7: 149554-149567. 10.1109/ACCESS.2019.2947091
75 Yung HI, Paas F. Effects of cueing by a pedagogical agent in an instructional animation: A cognitive load approach. Journal of Educational Technology & Society. 2015; 18; 3: 153-160.
76 Zhu F, Yang J, Pi Z. The interaction effects of an instructor's emotions in instructional videos and students' emotional intelligence on L2 vocabulary learning. Educational Technology Research and Development. 2022; 70; 5: 1695-1718. 10.1007/s11423-022-10148-2

By Xudong Zheng; Yunfei Ma; Tingyan Yue and Xianmin Yang


Xudong Zheng is an associate professor at the Jiangsu Engineering Technology Research Center of ICT in Education at Jiangsu Normal University in Xuzhou, China. His research focuses on educational technology, the learning sciences, and curriculum and instruction.

Yunfei Ma is an M.S. student at the Wisdom Education Research Center at Jiangsu Normal University in Xuzhou, China. Her research focuses on educational technology and the learning sciences.

Tingyan Yue is a research assistant at the School of Educational Science at Jiangsu Normal University in Xuzhou, China. Her research focuses on educational technology, the learning sciences, and curriculum and instruction.

Xianmin Yang is a professor at the Jiangsu Engineering Technology Research Center of ICT in Education at Jiangsu Normal University in Xuzhou, China. His research focuses on educational technology and educational big data analysis.

ISSN: 1042-1629 (print); 1556-6501 (electronic)
DOI: 10.1007/s11423-023-10188-2
Descriptors: Cues; Reflection; Prompting; Video Technology; Instructional Materials; Educational Technology; Learning Processes; Learning Motivation; Learner Engagement; Difficulty Level; College Students; Attention; Eye Movements; Outcomes of Education
Indexed in: ERIC
Language: English
Peer reviewed: Yes
Page count: 25
Document type: Journal Articles; Reports - Research
Education level: Higher Education; Postsecondary Education
Abstractor: As Provided
Entry date: 2023
