Study on retention using Video plus AI-generated retrieval practice
Author: Donald Clark
Abstract
The aim of this trial was to test the effectiveness of chunking video and placing effortful retrieval practice after each chunk. Chunking is the slicing of video content into several separate segments, or chunks, so that there is less cognitive load and forgetting. Retrieval practice is making learners recall what they think they know, reinforcing that knowledge and thereby increasing retention and subsequent recall. Two groups were compared. One was shown only a training video on Equality & Diversity produced for a large company; the other was shown the same video chunked into smaller segments, with AI-generated retrieval practice at the end of each segment. Both groups were tested immediately after the learning experience. The results showed a 61.5% higher score in the Video + AI-generated practice group than in the Video only group. This study shows that learning from video benefits significantly from chunking and AI-generated retrieval practice, which strengthen reinforcement, retention and recall.
Introduction
Video has become commonplace in learning, through YouTube and Vimeo, in both the public domain and on private channels. It has also become common to deliver learning video content from a VLE (Virtual Learning Environment), LMS (Learning Management System) or LXP (Learning eXperience Platform). Other video-specific platforms use Netflix-style carousels and other interfaces to deliver learning video content.
Yet little attention has been paid to the research suggesting that video should be enhanced with active learning. Research into the use of video for learning recommends several techniques that go beyond simply watching video on its own (Reeves & Nass, 1996; Zhang, 2006; Mayer, 2008; Brame, 2016; Chaohua, 2019).
Method
Twenty-six participants were selected. The first group of thirteen watched the video only. The second group of thirteen watched the same video chunked into four meaningful segments, edited to match separate topics and interspersed with AI-generated retrieval practice. This retrieval practice required the learner to recall key ideas and concepts and type them in, acts of recall and writing, generated by the AI tool, that reinforce learning. Any items that were not correct had to be re-entered until all were correct. A separate, identical written recall test was completed by both groups immediately after the learning experience.
Note that the retrieval practice tool used was WildFire. It creates online learning from the chunks of video by applying AI to the automatically generated video transcript, using it to identify the key learning points, create questions and generate links to external content that enhance the learning experience. If the learner has not been able to retrieve a relevant concept, the tool provides remedial practice until that concept is known. On input it accepts spelling variants, as well as British and American English.
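As a rough illustration of the workflow described above, the sketch below shows the general shape of such a retrieval-practice loop: play a chunk, prompt the learner to type in each key concept, and repeat any item until it is entered correctly, accepting simple answer variants. This is a minimal sketch only; the function names, data structures and sample content are assumptions for illustration, not WildFire's actual implementation or API.

```python
# Illustrative sketch of a chunked-video retrieval-practice loop.
# This is not WildFire's code; all names and structure are assumptions
# based on the workflow described above.

def normalise(answer: str) -> str:
    """Reduce a typed answer to a comparable form (case and spacing)."""
    return " ".join(answer.lower().split())

def run_chunk(segment_title: str, key_concepts) -> None:
    """Show one video chunk, then require recall of each key concept.

    key_concepts: list of (prompt, accepted_answers) pairs, where
    accepted_answers can include variants such as British/American spellings.
    """
    print(f"[play video chunk: {segment_title}]")  # placeholder for video playback
    for prompt, accepted in key_concepts:
        accepted_forms = {normalise(a) for a in accepted}
        # Remedial loop: keep asking until the concept is retrieved correctly.
        while normalise(input(prompt + " ")) not in accepted_forms:
            print("Not quite - try again.")

if __name__ == "__main__":
    # Hypothetical sample data: one chunk with one retrieval prompt.
    chunks = [
        ("Segment 1: key terms",
         [("Type the first word of 'Equality & Diversity':", ["equality"])]),
    ]
    for title, concepts in chunks:
        run_chunk(title, concepts)
```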
Results
The Video + AI group scored significantly higher than the video group.
Figures 1 and 2 show that the Video + AI group had a 61.5% increase in mean retention, from a mean score of 9.00 to 14.54.
In Figure 3, histograms of the two groups are compared, showing that the Video + AI group had a higher mean and that higher scores occurred more frequently.
In Figure 4, a box-and-whisker plot gives more insight into the respective distributions. The Video only group had a lower median value of 8 and a smaller range than the Video + AI group. The Video + AI group had a 75% increase in median score over the Video only group.
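For readers who want to verify the reported figures, the short check below reproduces the relative increases from the group statistics. The mean values are taken from the Results; the Video + AI median of 14 is an assumption inferred from the stated 75% increase over the Video only median of 8.

```python
# Sanity check of the relative increases reported above. Mean scores are
# from the Results; the Video + AI median of 14 is an assumption inferred
# from the stated 75% increase over the Video-only median of 8.
video_mean, ai_mean = 9.00, 14.54
video_median, ai_median = 8, 14

mean_increase = (ai_mean - video_mean) / video_mean          # ~0.615 -> ~61.5%
median_increase = (ai_median - video_median) / video_median  # 0.75 -> 75%

print(f"Mean increase:   {mean_increase:.1%}")    # prints 61.6% from the rounded means
print(f"Median increase: {median_increase:.0%}")  # prints 75%
```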
Discussion
We know from Guo (2014), from a large data set of learning video data gathered from MOOCs (Massive Open Online Courses), that learners drop out in large numbers at around six minutes, with engagement dropping dramatically to around 50% at 9-12 minutes and around 20% beyond this. Evidence from other studies on attention, using eye-tracking, confirms this rapid drop in arousal (Risko, 2012). The suggestion is that learning videos should be six minutes or less. Chunking video into smaller, meaningful segments achieves this aim and relieves the load on working memory.
Many can recall scenes from films and videos, but far fewer can remember what was actually said. That is because our episodic memory is strong and video appeals to that form of visual memory and recall, but video is poor for semantic memory and semantic knowledge, what we need to know in terms of language. One remembers the scene and can literally play it back in one’s mind, but it is more difficult to remember facts and speech. This is why video is not so good at imparting detail and knowledge. There is a big difference between recalling episodes and knowing things.
Learning is a lasting change in long-term memory, and video suffers from the lack of opportunity to encode and consolidate memories. Working memory lasts about 20 seconds and can only hold three or four things in mind at one time. Without the time to encode, these things can be quickly forgotten through cognitive overload or the failure to consolidate into long-term memory (Sweller, 1988). Our minds move through video at the pace of the narrator but, like a shooting star, the memories burn up behind us, as we have not had the opportunity to encode them into long-term memory. Without additional active, effortful learning, we forget. A further, well-researched problem is that people often ‘think’ or ‘feel’ they have learnt from video but, as Bjork (2013) and others have shown, this can be ‘illusory’ learning: the learner mistakes the feeling of having learnt for actual learning and, when tested, is shown to have learned less than they thought.
How do we reduce cognitive load in video for learning? Mayer (2003) and others have shown that text plus audio plus video on the screen, commonly seen in lecture capture, actually inhibits learning: one should not put captions, text or scripts on the screen while the narrator or person on the screen is talking. Fiorella (2019) proposes that learning improves when there are “visual rests” and that memory is enhanced when “people have a chance to stop and think about the information presented”. Chunking video into smaller, meaningful segments and providing the opportunity for active, effortful learning will enhance learning both by reducing cognitive load and by increasing reinforcement, retention and recall.
But what exactly should learners do after and between these video chunks? MacHardy (2015) shows that the relationship between the video and the active learning must be meaningful and closely related: in a large data mining exercise, they showed that if the two are too loosely related, it inhibits student attainment. To increase reinforcement, retention and recall, several studies suggest that retrieving key concepts is a powerful learning technique (Szpunar, 2013; Roediger, 2006; Vural, 2013). This was the aim of this study: to test the hypothesis that chunked video plus AI-generated retrieval practice increases reinforcement, retention and recall.
Practical applications
There are several possible applications of this form of enhanced video learning:
1. Existing video learning libraries can be made into far more effective learning experiences
2. New videos for learning can be made into far more effective learning experiences
Note that additional design recommendations identified during the study include:
1. Scripting the videos into a more ‘chaptered’ structure
2. Clear edit points on visuals and audio at the end of each planned chunk of video
3. Close relationship between the video and the retrieval practice
Conclusion
This trial provides evidence that the use of both chunked videos and AI-generated retrieval practice, in combination, significantly increases retention and recall and can be strongly recommended for both existing and new video learning content.
Bibliography
Bjork, R.A., Dunlosky, J. and Kornell, N., 2013. Self-regulated learning: Beliefs, techniques, and illusions. Annual review of psychology, 64, pp.417-444.
Brame, C.J., 2016. Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE—Life Sciences Education, 15(4), p.es6.
Chaohua, O., Joyner, D. and Goel, A., 2019. Developing videos for online learning: A 7-principle model. Online Learning.
Fiorella, L., van Gog, T., Hoogerheide, V. and Mayer, R.E., 2017. It’s all a matter of perspective: Viewing first-person video modeling examples promotes learning of an assembly task. Journal of Educational Psychology, 109(5), p.653.
Fiorella, L., Stull, A.T., Kuhlmann, S. and Mayer, R.E., 2019. Fostering generative learning from video lessons: Benefits of instructor-generated drawings and learner-generated explanations. Journal of Educational Psychology.
Guo, P.J., Kim, J. and Rubin, R., 2014. How video production affects student engagement: An empirical study of MOOC videos. In L@S '14: Proceedings of the First ACM Conference on Learning at Scale, pp.41-50. New York: ACM.
MacHardy, Z. and Pardos, Z.A., 2015. Evaluating the relevance of educational videos using BKT and big data. In Proceedings of the 8th International Conference on Educational Data Mining, Madrid, Spain.
Mayer, R.E. and Moreno, R., 2003. Nine ways to reduce cognitive load in multimedia learning. Educational psychologist, 38(1), pp.43-52.
Mayer, R.E., 2008. Applying the science of learning: Evidence-based principles for the design of multimedia instruction. American psychologist, 63(8), p.760.
Reeves, B. and Nass, C.I., 1996. The media equation: How people treat computers, television, and new media like real people and places. Cambridge university press.
Risko, E.F., Anderson, N., Sarwal, A., Engelhardt, M. and Kingstone, A., 2012. Everyday attention: Variation in mind wandering and memory in a lecture. Applied Cognitive Psychology, 26(2), pp.234-242.
Roediger III, H.L. and Karpicke, J.D., 2006. The power of testing memory: Basic research and implications for educational practice. Perspectives on psychological science, 1(3), pp.181-210.
Sweller, J., 1988. Cognitive load during problem solving: Effects on learning. Cognitive science, 12(2), pp.257-285
Szpunar, K.K., Khan, N.Y. and Schacter, D.L., 2013. Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, 110(16), pp.6313-6317.
Vural, O.F., 2013. The Impact of a Question-Embedded Video-based Learning Tool on E-learning. Educational Sciences: Theory and Practice, 13(2), pp.1315-1323.
WildFire www.wildfirlearning.co.uk
Zhang, D., Zhou, L., Briggs, R.O. and Nunamaker Jr, J.F., 2006. Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness. Information & management, 43(1), pp.15-27.