Lecture capture technologies (LCT) such as Echo360, Mediasite, and Tegrity have become very popular in recent years. Many studies have shown that students favor the use of such technology, but relatively little research has studied the impact of LCT on learning. This article examines two research questions: (1) Is the use of LCT associated with improved learning outcomes? and (2) How can instructors encourage students to make more and better use of LCT?
Lecture Capture; Instructional Technology; Student Performance
Advances in technology and changes in student demographics have led to significant changes in the college classroom experience over the last decade. Once dominated by lectures, chalkboards, and overhead transparencies, today's college classes have been transformed into rich multimedia experiences involving clickers, web sites, podcasts, and YouTube videos. One significant development in the last few years is the emergence of lecture capture technology (LCT). Platforms such as Accordent, Echo360, Mediasite, Panopto, and Tegrity capture everything that happens during a class session—video, audio, slides, virtual whiteboard, etc.—and make the materials available for remote use, usually via a web connection. While the use of such technology has increased dramatically in recent years, its impact on student learning is not yet fully understood. This article addresses two questions related to the use of lecture capture technology. First, is the use of LCT associated with improved learning outcomes? Second, assuming that it has a positive impact, how can instructors encourage their students to make more and better use of LCT?
Our university began using Echo360 lecture capture technology in 2010 and has installed the necessary hardware and software in many classrooms across the campus. A small but growing number of faculty and students have embraced the technology, while others remain skeptical. This skepticism mirrors that which greeted the use of the Internet as an alternative delivery medium in the late 1990s. Concerns about the technology range from its impact on student class attendance to the ownership of course content. After about a year of using Echo360, opinions of the technology were generally favorable, but there was also a sense that it was not being fully utilized by students. As a result, we were motivated to collect data on student LCT usage and to explore the relationship between usage and learning outcomes.
While many papers (some of which are discussed below) discuss the pros and cons of LCT or describe particular case studies involving LCT, the goal of this study is to contribute to the understanding of LCT use and best practices using a data‐driven approach.
Many researchers have explored the use of new instructional technologies such as podcasts, clickers, and microblogs, to name only a few. For general discussions of technology in the classroom, refer to the recent reviews by Kirkwood and Price (2014) and Fu ([
The research related to LCT can be divided into four categories. The first category addresses the “how‐to” of lecture capture, for example, ways that LCT can and should be used. Such research includes discussions of best practices (Zhu & Bergom [
The second, and by far largest, category of research includes papers that report on early experiences with lecture capture in particular institutions or programs. These case studies often include assessments of student and faculty perceptions (e.g., do they use it and like it?) and the impact of LCT on class attendance. Karnad ([
Davis, Connolly, and Linfield ([
Toppin ([
The third category of research relates to students’ learning styles. Few papers in this category explicitly examine connections between what learning theories predict and what students actually do. A notable exception is the work by Dey et al. ([
Vajoczki, Watt, Marquis, Vine, and Liao ([
Other papers in this third category touch on teaching and learning styles indirectly by focusing on student usage patterns. Based on a large survey of Australian students, Preston et al. ([
The fourth category of research attempts to link LCT use to objective measures of learning. Some studies focus on the differences between traditional face‐to‐face lectures and other delivery methods such as viewing previously recorded lectures. For example, Euzent, Martin, Moskal, and Moskal ([
Other studies in this category explore how the availability of LCT as a supplement to traditional classes can affect performance. Owston, Lupshenyuk, and Wideman ([
Ford, Burns, Mitch, and Gomez ([
Stroup, Pickard, and Kahler ([
While the three studies reviewed above offer valuable insights, their reliance on student self‐reported data may cast some doubt on the results. To address this issue, some researchers have devised ways to measure student access to LCT in more objective ways. For example, von Konsky, Ivins, and Gribble ([
Bollmeier, Wenger, and Forinash ([
Shaw and Molnar ([
In summary, there has been a significant amount of research on lecture capture technology in recent years. Some effects of LCT have been undoubtedly positive; however, negative consequences have also been observed. Tables 1 and 2 list some of the advantages and disadvantages, respectively, of the use of LCT. While student satisfaction with LCT is generally high, an important question that remains unresolved is how LCT use relates to learning outcomes. Although several studies have examined this question, the results have been varied. The mixed results may be related to factors including discipline (e.g., engineering vs. psychology), type of lecture capture (e.g., including video of instructor or not), level of analysis (e.g., exam grade vs. course grade), level of student (e.g., undergraduate vs. graduate), the nature of the material and delivery (e.g., one‐way transfer of information vs. interactive problem solving), or whether the lecture capture was a supplement to or replacement for lectures. Determining the precise causal factors from such a diverse set of studies is beyond the scope of this article. Our study attempts to contribute to the understanding of this topic by exploring the connections between lecture capture use and student performance in a business course.
Table 1. Some advantages of lecture capture

Increases student satisfaction: Toppin (31); Nashash and Gunn (21)
Students like and expect new technology: Folley (10)
Facilitates review of material, especially in preparation for exams: Vajoczki et al. (33); Cooke et al. (3); Dibacco et al. (7)
Students report higher engagement and improved learning: Preston et al. (25); Pons et al. (24); Drouin (8)
Increases accessibility, e.g., for those with disabilities or nonnative English speakers: Vajoczki et al. (33); Shaw and Molnar (27)
Reduces routine informational inquiries to instructor: Bollmeier et al. (1); Rogers and Cordell (26)
Provides flexibility for students who must miss class (e.g., student athletes): DeSantis et al. (5)
Allows students to review their own in-class presentations: Smith and Sodano (28)
Table 2. Some disadvantages of lecture capture

May reduce class attendance: Preston et al. (25); Vajoczki et al. (33); Drouin (8)
May enable nonparticipation: Stroup et al. (29); Drouin (8)
Potential hardware and software costs to institution: "Lecture Capture Yet to Take Hold" (17); Newton et al. (22)
Learning curve for instructors: Davis et al. (4); "Lecture Capture Yet to Take Hold" (17); Newton et al. (22)
Uncertainty about intellectual property rights (i.e., who "owns" the lecture content?): "Lecture Capture Yet to Take Hold" (17); Newton et al. (22)
The relationship between the use of lecture capture materials and student performance was investigated in two sections of an undergraduate Operations Management course at a mid‐size, public university. Both sections had the same instructor, and there were 35 students in each section (70 students in total). The Operations Management course is required for all business majors, and students typically take it in the third or fourth year of the program. The course is divided into three modules, each covering three chapters from the textbook. An exam is given after each module, and the modules are not cumulative. Toward the beginning of the semester, the instructor informed students that all lectures were being recorded and demonstrated how the lectures could be accessed. To get a sense of students' baseline behavior, access to lecture capture materials was examined beginning after the first exam. This is typically the time of the semester when there is renewed interest in academic performance (i.e., students are beginning to express concern about their grades).
Once scheduled by the Instructional Technology department, each class session was captured automatically and uploaded to a server. Links to the sessions were placed in a specific area of the course web site, which uses the Blackboard learning management system. In addition, lecture notes were generated for each lecture. The notes contained all of the material written on a virtual white board during class sessions, including announcements and example problems solved in class. The notes were manually uploaded to the course web site. Blackboard is robust in that it can track many data points, including how many times a class resource is accessed and for how long.
As mentioned previously, two central questions motivated this project:
Does the use of lecture capture materials improve students’ learning?
Assuming that the answer to the above question is “yes,” how can instructors increase the use of lecture capture materials by students?
To address the first question, data were collected regarding students’ use of lecture capture videos and their performance on exams. Since links to the lecture capture videos were located in a specific area of Blackboard, the number of times each student accessed the lecture capture folder and clicked on one of the links was recorded, and the association between students’ access patterns and exam scores examined.
To address the second question, different methods were used to raise students' awareness of lecture capture materials. The first method required students to watch a previously recorded lecture capture video in lieu of coming to class. The second method was the use of extra credit quizzes based on particular lectures. Three quizzes were given between the second and third exams; the first quiz was based on the required lecture capture viewing. Each quiz had three questions, each worth 1 point on the third exam. Thus, a student could earn up to 9 extra credit points on the third exam (which would raise the final course grade by nearly 2 points out of 100). Quizzes were administered online and were each available for approximately 30 hours. Once a quiz was begun, students had 9 minutes to complete it.
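As a rough check on the extra-credit arithmetic: 9 bonus points on the third exam raising the final course grade by "nearly 2 points out of 100" implies that the third exam carries a weight of roughly 2/9, or about 22%, of the course grade. The exact exam weighting is not stated in the article; the sketch below simply makes the implied ratio explicit.

```python
# Back-of-the-envelope check of the extra-credit arithmetic described above.
# The exam weight is inferred, not stated in the article.
bonus_points = 3 * 3  # three quizzes, three 1-point questions each
grade_lift = 2        # approximate lift in final course grade (out of 100)

implied_exam_weight = grade_lift / bonus_points
print(f"implied Exam 3 weight = {implied_exam_weight:.0%}")  # prints 22%
```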
The third method to increase the use of lecture capture materials was promotion by the instructor. One of the two sections was designated as the “strong promotion” section. In this section, the instructor reminded the class two or three times about the extra credit quizzes, and emphasized the connection between specific lecture capture videos and quiz content. The other section was designated as the “weak promotion” section. In this section, the instructor reminded students once about quizzes, but did not tell them about the explicit connection with lecture captures. An important part of “promotion” is thus not only the number of reminders but the emphasis on the linkage between quizzes and specific parts of the captured lectures.
The following hypotheses regarding the use of lecture capture materials are proposed:
H1. More lecture capture access will be associated with better learning outcomes.
H2. Efforts to promote lecture capture use are positively associated with student usage.
Below we report how data were collected and analyzed to test these hypotheses.
The first question considered is how use of lecture capture materials affects learning outcomes. Using linear regression, the association between lecture capture video access between Exam 1 and Exam 2 (independent variable) and Exam 2 performance (dependent variable) was examined. One could argue that "good" students may be inclined to make better use of class resources than "poor" students, and thus that any connection between lecture capture video views and exam performance is not necessarily causal. One could also argue that students who do not do well on the first exam will work harder on subsequent exams. To control for these effects, performance on the previous exam was included as an independent variable. The section (strong or weak promotion) was also included as an independent variable. The model is significant, with p <.001, and a value of multiple R
Table 3. Regression results for Exam 2 model

Variable        Coefficient   Std. Error   t-Statistic   p Value
Intercept            25.99         5.92          4.39    4.18e-05***
Exam 1                 .65          .07          8.76    1.17e-12***
Lecture views          .30          .13          2.24    .028*
Section weak          4.31         1.88          2.29    .025*

Note: ***p <.001, **p <.01, *p <.05, +p <.1, ns p ≥.1.
These results indicate that, as expected, Exam 1 is a good predictor of performance on Exam 2. However, the number of lecture views is also statistically significant (α =.05). The weak promotion section is also, unexpectedly, associated with higher Exam 2 scores; this is surprising given that no promotion occurred during this block of classes, and the section variable was included only to facilitate comparison with the model covering the next block of classes.
A similar regression was performed for the block of classes between Exam 2 and Exam 3. In this model, Exam 3 is the dependent variable, and the independent variables are Exam 2, lecture views (between Exam 2 and Exam 3), and section. This model is also significant, with p <.001 and a value of multiple R
Table 4. Regression results for Exam 3 model

Variable        Coefficient   Std. Error   t-Statistic   p Value
Intercept            25.74         9.09          2.83    .006**
Exam 2                 .58          .11          5.25    1.75e-06***
Lecture views          .21          .13          1.69    .097+
Section weak         −3.21         2.61         −1.23    .222 ns

Note: ***p <.001, **p <.01, *p <.05, +p <.1, ns p ≥.1.
These results again indicate a strong association between performance on prior and current exams. Even accounting for this effect, however, the number of lecture capture video views had a significant effect on Exam 3 performance (α =.10). Interestingly, the section variable was not statistically significant in this model even though the model covered the block of classes in which promotion occurred. The negative coefficient of the section variable is in the expected direction: the weak promotion section is associated with lower Exam 3 scores. Collectively, these results provide support for hypothesis H1: more lecture capture access is associated with better learning outcomes.
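The regression setup used for both exam models can be sketched as follows. The data here are simulated for illustration only (the study's student records are not reproduced in the article), with coefficients loosely echoing those reported above; variable names are hypothetical.

```python
# Illustrative sketch of the exam-score regressions: current exam score
# regressed on prior exam score, lecture capture views, and a section dummy.
# All data are simulated; coefficients loosely echo the reported values.
import numpy as np

rng = np.random.default_rng(42)
n = 70  # 35 students in each of two sections

exam1 = rng.normal(75, 10, n)     # prior-exam control variable
views = rng.poisson(3, n)         # lecture capture video accesses
weak = np.repeat([0, 1], n // 2)  # 1 = weak-promotion section

# Simulated outcome with coefficients near those reported for Exam 2.
exam2 = 26 + 0.65 * exam1 + 0.3 * views + 4.3 * weak + rng.normal(0, 7, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), exam1, views, weak])
beta, *_ = np.linalg.lstsq(X, exam2, rcond=None)

# Standard errors and t statistics from the residual variance.
resid = exam2 - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se

for name, b, s, t in zip(
    ["Intercept", "Exam 1", "Lecture views", "Section weak"], beta, se, t_stats
):
    print(f"{name:13s} coef={b:7.2f}  se={s:5.2f}  t={t:6.2f}")
```

Swapping in actual exam scores, per-student view counts, and a section indicator would reproduce the form of the regression tables above.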
To examine how efforts to promote lecture capture use affect student usage, access patterns in the period between the first and second exams were examined. This block of classes is referred to as block A. There were eight class sessions within this block, but one session was held in the computer lab and was not captured, so seven sessions in total were recorded. Blackboard automatically tracks the usage of all content on the course web site, so it was possible to determine how many times a particular lecture capture session was accessed and for how long. It was also possible to determine how many times the corresponding lecture notes were accessed. For each lecture, access to the materials was examined for a period of one week beginning on the day of the lecture. Table 5 reports the number of times the lecture capture links were accessed, how long the access lasted, and the number of times the lecture notes were accessed during class block A.
Table 5. Summary of lecture capture access during class block A

Class session   Link access   Average view time (min:sec)   Lecture note views
A1                       0            N/A                                  14
A2                       8          17:50                                  16
A3                       4          00:17                                  10
A4                       3          00:35                                   2
A5                      10          09:50                                  23
A6                      14          10:35                                  64
A7                       7          24:15                                  30
Average                6.6          09:03                                22.7
These numbers should be interpreted with care. For example, simply accessing the lecture capture link does not necessarily mean that a student watched the video. Also, the item‐level tracking mechanism does not indicate who is accessing the link; some students may access the links multiple times. One might assume that access would increase after a lecture covering particularly challenging material or after a lecture with low attendance. Access did become more frequent and average viewing time became longer as the end of a module approached, potentially indicating students’ concerns about the need to review the material prior to an exam.
Access patterns during the period between the second and third exams were also examined. This block of classes is referred to as block B, and it corresponded to the time period during which extra credit quizzes were given. In lieu of attending the second class session in block B, students were asked to view a previously recorded lecture, and the first extra credit quiz was based on this material. Table 6 reports the number of times the lecture capture links were accessed, how long the access lasted, and the number of times the lecture notes were accessed. Statistics cover a one‐week time frame, beginning on the day the lecture was given.
Table 6. Summary of lecture capture access during class block B

Class session   Link access   Average view time (min:sec)   Lecture note views
B1                      10          13:25                                   8
Bx                      63          18:20                                  53
B2                      10          20:08                                  20
B3                       6          04:14                                  22
B4                      11          21:38                                  24
B5                       0            N/A                                  44
B6                      15          03:43                                  51
B7                       6          27:12                                  57
Average                8.3          15:03                                32.3

Note: Students were required to view class Bx, so the values for this class are not included in the averages. Marked sessions indicate lectures upon which extra credit quizzes were based.
Note that the averages reported in the table do not include the values for the required session. Again, the numbers must be interpreted with care, but several interesting patterns are evident. First, a large majority of students—63 of 70—accessed the required class (labeled “Bx” in the table). Second, the lecture videos associated with extra credit quizzes were accessed more than the nonquiz‐related lectures. Third, there appears to be a marked increase in views of the lecture notes after class Bx. These results suggest support for hypothesis H2.
In addition to the class‐session‐level analysis, student‐level changes in access to the lecture capture materials can shed light on hypothesis H2. The tracking mechanism on Blackboard makes it possible to see when a student enters the lecture capture links area and accesses one of the links. However, since all of the lecture links are in the same area, it is not possible to tell which of the links were accessed. For example, over a one‐week time frame, a student may access multiple lectures. In addition, the lecture notes are located in folders with other materials for a given week (e.g., handouts, example problems, etc.). Nevertheless, student‐level data regarding lecture capture video and class notes access should enable an examination of changes over time. To be consistent, video or file access during the one‐week period in which students were required to view a lecture was not recorded. Using a standard one‐tailed t test, differences in video access and file access between class block A and class block B were examined, the hypothesis being that both video and file access would be greater in block B. Results are reported in Table 7.
Table 7. Comparison of video and file access between class blocks A and B (both sections)

                                   Video Access             File Access
                               Block A     Block B     Block A     Block B
Mean                              6.63        8.46       28.57       44.26
Variance                         50.44      108.19     1320.42     2109.36
Observations                        70          70          70          70
Pearson correlation                     .58                     .70
Hypothesized mean difference              0                       0
Degrees of freedom                       69                      69
t Statistic                           −1.79                   −3.99
P(T ≤ t) one-tail                    .0386*             7.92e-05***
t Critical one-tail                    1.67                    1.67

Note: ***p <.001, **p <.01, *p <.05, +p <.1, ns p ≥.1.
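The student-level comparison above is a paired, one-tailed t test on per-student access counts. A minimal sketch follows, using hypothetical counts rather than the study's data; the critical value 1.67 is the approximate one reported for df = 69.

```python
# Sketch of a paired, one-tailed t test comparing per-student access counts
# between class blocks A and B. All counts are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 70  # students across both sections

block_a = rng.poisson(6.6, n).astype(float)  # accesses during block A
block_b = rng.poisson(0.3 * block_a + 6.5)   # block B, correlated with A

# Paired t statistic on the per-student differences.
diff = block_b - block_a
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))

# One-tailed test, H1: access in block B exceeds access in block A.
t_crit = 1.67  # approximate critical value for alpha = .05, df = 69
print(f"t = {t_stat:.2f}; reject H0 at alpha = .05: {t_stat > t_crit}")
```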
These results indicate that there is a significant increase in lecture capture video access and file access during class block B. Again, note that this increase does not include the required access to class Bx. (The small difference in mean video access compared to the numbers reported in Tables 5 and 6 is due to the fact that in this analysis data are tracked by student rather than by item.) The student‐level analysis thus also provides support for hypothesis H2.
What explains the increase in lecture capture video and file access—the required lecture capture session, the extra‐credit quizzes, or the promotion? To explore the impact of being in the weak or strong promotion section, additional tests were performed to determine whether the mean number of video and file accesses differed significantly between the sections. Since different students were in each section, a two‐sample t test was used. For class block A, the mean number of video and file accesses was not significantly different between the weak and strong promotion sections (p values greater than .1 for both video and file access). This result is as expected, since no promotional efforts were made during class block A. There was, however, a statistically significant increase in video link access for the strong promotion section between class blocks A and B. File access was also higher for the strong promotion section, but the difference is not statistically significant.
The results of the regression models indicated a positive association between access to lecture capture videos and performance on exams, thus hypothesis H1 is supported. The fact that performance on previous exams was controlled for increases confidence in the result. Interestingly, the association between lecture views and exam performance is stronger in class block A (before the required lecture view and optional extra credit quizzes) than in class block B.
The analysis reported in Tables 5 to 7 indicates that access to lecture capture materials does indeed increase during class block B. Requiring a lecture capture view and offering optional extra credit quizzes increased student usage of the lecture capture material, thus hypothesis H2 is also supported by the analysis. It is interesting to note that the increase in file access is larger than the increase in video access between class blocks A and B. This suggests that once students realized that lecture notes were available, they made more use of the notes. Simply telling and showing students that lecture material was available was not as effective as requiring them to view a lecture video; the assignment seems to have made students more aware of the lecture capture materials. A caveat, however, is that the file access numbers reported in Table 7 include access to all files, not just lecture capture notes. Due to the way content was organized in Blackboard, it was not possible to determine access to the lecture capture notes on a student‐level basis. The item‐level numbers reported in Tables 5 and 6, however, refer specifically to the lecture capture notes.
The results reported in Table 8 indicate that the type of promotion used matters. The optional extra‐credit quizzes, as well as the additional promotion of lecture capture use, began in class block B, after the required lecture capture view assignment. Both the weak and strong promotion sections were informed about the extra‐credit quizzes, but the strong promotion section was reminded two to three times and also told about the explicit connection between lecture capture and the quiz content. Although video link access increased more for the strong promotion section (as compared to the weak promotion section), the difference between sections with respect to the number of file views (including lecture capture notes) was not statistically significant. This suggests that the verbal reminders in the strong promotion section did not have a significant impact. This, in turn, supports the idea that simply telling students about the lecture capture does not translate into increased usage.
Table 8. Comparison of video and file access between weak and strong promotion sections (class block B)

                                   Video Access            File Access
                                 Weak     Strong        Weak      Strong
Mean                             6.43      10.49       41.91       46.60
Variance                        73.72     137.37     1659.49     2609.95
Observations                       35         35          35          35
Hypothesized mean difference           0                     0
Degrees of freedom                    62                    65
t Statistic                        −1.65                  −.42
P(T ≤ t) one-tail                 .0518+              .3364 ns
t Critical one-tail                 1.67                  1.67

Note: ***p <.001, **p <.01, *p <.05, +p <.1, ns p ≥.1.
This article describes the results of a study examining connections between the use of lecture capture materials and student performance. The first question was whether there is a link between the use of lecture capture materials and learning outcomes. Linear regression showed that there is an association between lecture capture video access and exam performance, even after controlling for performance on previous exams. The second question was how instructors can increase students' use of lecture capture materials. Three approaches were explored: (1) requiring students to view a previously recorded lecture in lieu of attending class, (2) offering extra credit quizzes based on particular lectures, and (3) verbal promotion by the instructor.
The most insightful result from this study is how much access to lecture capture materials increased after requiring students to view a previously recorded lecture. Doing so encouraged them to “tune in” to the availability of videos and notes, thus, the most effective way to increase students’ awareness about lecture capture materials may be simply to require access as part of an assignment. Once they actually see and engage with what is available, students seem to be more likely to make use of lecture capture resources.
Based on the results of this study, we plan to make at least two changes with respect to LCT use in our classes. First, we plan to require LCT access early in the semester to make students aware of the availability of materials. Merely telling them about it is not enough. Second, we plan to put lecture notes and video links in the same folder on Blackboard. To facilitate tracking for the study, we put the video links in a separate folder, but this may have affected access.
Instructors from many disciplines can benefit from the insights from this study. Nevertheless, the study does have limitations. Item tracking needs to be fine‐tuned to get a more detailed picture of what materials students are looking at and when. While the current study relied solely on objective data, surveying student attitudes, particularly about the timing and weight of extra‐credit quizzes, may be of value. The analysis could also be enriched by examining a broader sample by expanding the number of students, collecting data from other courses at our university, or perhaps collecting data from other universities. As past studies indicate, factors such as discipline, teaching style, and student level may also have an impact on results. Ideally, the analysis would also be carried out with a control group that would not be given any prompting to view captured lectures. These issues highlight some of the challenges of doing empirical research in real time—it is difficult to design a rigorous, scientific study around the constraints of teaching real classes to real students. Additional studies will help students and instructors alike get the most value possible from lecture capture technology.
By Thomas W. Sloan and David A. Lewis
Thomas W. Sloan is an Associate Professor in the Manning School of Business at the University of Massachusetts Lowell. He teaches courses on management science, operations management, and supply chain management. Dr. Sloan's research interests are in the areas of operations management and using technology to enhance learning.
David A. Lewis is a Professor of Operations and Information Systems. He teaches courses on operations management, quality control, international management, and management information systems. Professor Lewis's research interests include total quality control, using technology in the classroom, and distance learning. He has been teaching online courses for 18 years.