April 20, 2024

Carnegie Mellon and Lumen Learning Announce EEP-Relevant Collaboration

Author: Michael Feldstein

Late last week, Carnegie Mellon University (CMU) and Lumen Learning jointly issued a press release announcing a collaboration to integrate the Lumen-developed RISE framework for analyzing and improving curricular materials into the toolkit that Carnegie Mellon will be contributing under open licenses (and unveiling at the Empirical Educator Project (EEP) summit that it is hosting in May).

To be clear, Lumen and Carnegie Mellon are long-time collaborators, and this particular project probably would have happened without either EEP or CMU’s decision to contribute the software that they are now openly licensing. But it is worth talking about in this context for two reasons. First, it provides a simple, easy-to-understand example of the kinds of collaborations we hope to catalyze. And second, it illustrates how CMU’s contribution and the growth of the EEP network can amplify the value of such contributions.

RISE

The RISE framework is pretty easy to understand. RISE stands for Resource Inspection, Selection, and Enhancement. Lumen’s focus is on using it to improve Open Educational Resources (OER) because that’s what they do, but there’s nothing about RISE that only works with OER. As long as you have the right to modify the curricular materials you are working with—even if that means removing something proprietary and replacing it with something of your own making—then the RISE framework is potentially useful.

From the paper:

In order to continuously improve open educational resources, an automated process and framework is needed to make course content improvement practical, inexpensive, and efficient. One way that resources could be programmatically identified is to use a metric combining resource use and student grade on the corresponding outcome to identify whether the resource was similar to or different than other resources. Resources that were significantly different than others can be flagged for examination by instructional designers to determine why the resource was more or less effective than other resources. To achieve this, we propose the Resource Inspection, Selection, and Enhancement (RISE) Framework as a simple framework for using learning analytics to identify open educational resources that are good candidates for improvement efforts.

The framework assumes that both OER content and assessment items have been explicitly aligned with learning outcomes, allowing designers or evaluators to connect OER to the specific assessments whose success they are designed to facilitate. In other words, learning outcome alignment of both content and assessment is critical to enabling the proposed framework. Our framework is flexible regarding the number of resources aligned with a single outcome and the number of items assessing a single outcome.

The framework is composed of a 2 x 2 matrix. Student grade on assessment is on the y-axis. The x-axis is more flexible, and can include resource usage metrics such as pageviews, time spent, or content page ratings. Each resource can be classified as either high or low on each axis by splitting resources into categories based on the median value. By locating each resource within this matrix, we can examine the relationship between resource usage and student performance on related assessments. In Figure 2, we have identified possible reasons that may cause a resource to be categorized in a particular quadrant using resource use (x-axis) and grades (y-axis).

Figure 2. A partial list of reasons OER might receive a particular classification within the RISE framework.

By utilizing this framework, designers can identify resources in their courses that are good candidates for additional improvement efforts. For instance, if a resource is in the High Use, High Grades quadrant, it may act as a model for other resources in the class. If a resource falls into the Low Use, Low Grades quadrant, it may warrant further evaluation by the designers to understand why students are ignoring it or why it is not contributing to student success. The goal of the framework is not to make specific design recommendations, but to provide a means of identifying resources that should be evaluated and improved.

Let’s break this down.

RISE is designed to work with a certain type of common course design, where content and assessment items are both aligned to learning objectives. This design paradigm doesn’t work for every course, but it works for many. The work of aligning course content and assessment questions with specific learning objectives is intended to pay dividends by giving course designers and instructors added visibility into whether their course design is accomplishing what it was intended to accomplish. The 2×2 matrix in the RISE paper captures this value rather intuitively. Let’s look at it again:

Each box captures potential explanations that would be fairly obvious candidates to most instructors. For example, if students are spending a lot of time looking at the content but still scoring poorly on related test questions, some possible explanations are that (1) the teaching content is poorly designed, (2) the assessment questions are poorly written, or (3) the concept is hard for students to learn. There may be other explanations as well. But just seeing that students are spending a lot of time on particular content yet are still doing poorly on the related assessment questions leads the instructor and the content designer (who may or may not be the same person) to ask useful questions. And then there is some craft at the end in thinking through how to deal with the content that has been identified as potentially problematic.

This isn’t magic. It’s not a robot tutor in the sky. In fact, it’s almost the antithesis. It’s so sensible that it verges on boring. It’s hygiene. Everybody who teaches with this kind of course design should regularly tune those courses in this way, as should everybody who builds courses that are designed this way. But that’s like saying everybody should brush their teeth at least twice a day. It’s not sexy.

Also, easy to understand and easy to do are two different things. Even assuming that your curricular materials are designed this way and that you have sufficient rights to modify them, different courses live in different platforms. While you don’t need to get a lot of sophisticated data to do this analysis—just basic Google Analytics-style page usage and item-level assessment data—it will take a little bit of technical know-how, and the details will be different on each platform. Once you have the data, you will then need to be able to do a little statistical analysis. There isn’t much math in this paper and what little there is isn’t very complicated, but it is still math. Not everybody will feel comfortable with it.
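To make the shape of that analysis concrete, here is a minimal sketch in Python (using pandas), assuming a hypothetical per-resource export named resource_metrics.csv with resource_id, pageviews, and mean_grade columns. It is not the package Lumen is contributing; it just illustrates the median-split quadrant classification the paper describes.

    # Minimal, illustrative RISE-style classification (not Lumen's actual package).
    # Assumes one row per resource, with usage and grade data already aligned to outcomes.
    import pandas as pd

    df = pd.read_csv("resource_metrics.csv")  # hypothetical export from your platform

    # Split each axis at its median, as the paper describes.
    high_use = df["pageviews"] >= df["pageviews"].median()
    high_grade = df["mean_grade"] >= df["mean_grade"].median()

    def quadrant(use, grade):
        if use and grade:
            return "High Use, High Grades"  # potential model for other resources
        if use and not grade:
            return "High Use, Low Grades"   # content, items, or concept may need work
        if not use and grade:
            return "Low Use, High Grades"   # students succeed without spending time on it
        return "Low Use, Low Grades"        # worth a closer look by designers

    df["rise_quadrant"] = [quadrant(u, g) for u, g in zip(high_use, high_grade)]
    print(df[["resource_id", "rise_quadrant"]].sort_values("rise_quadrant"))

From there, deciding why a resource landed where it did, and what to do about it, is still human work.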

The typical way the sector has handled this problem has been to put pressure on vendors, as consumers, to add this capability as a feature to their products. But that process is slow and uncertain. Worse, each vendor will likely implement the feature slightly differently and non-transparently, which makes the last point of friction even harder to overcome: features like this require a little bit of literacy to use well. Everybody knows the mantra “correlation is not causation,” but it is better thought of as the closest thing that Western scientific thinking can get to a Zen koan.1 If you think you’ve plumbed the depths of meaning of that phrase, then you probably haven’t. If we want educators to understand both the value and the limitations of working with data, then they need to have absolute clarity and consistency regarding what those analytics widgets are telling them. Having ten widgets in different platforms telling them almost but not quite the same things in ways that are hard to differentiate will do more harm than good.

And this is where we fail.

While the world is off chasing robot tutors and self-driving cars, we are leaving many, many tools like RISE just lying on the floor, unused and largely unusable, for the simple reason that we have not taken the extra steps necessary to make them easy enough and intuitive enough for non-technical faculty to adopt. And by tools, I mean methods. This isn’t about technology. It’s about literacy. Why should we expect academics, of all people, to trust analytical methods that nobody has bothered to explain to them? They don’t need to understand how to do the math, but they do need to understand what the math is doing. And they need to trust that somebody that they trust is verifying that the math is doing what they think it is doing. They need to know that peer review is at work, even if they are not active participants in it.

Making RISE shine

This is where CMU’s contribution and EEP can help. LearnSphere is the particular portion of the CMU contribution into which RISE will be integrated. I use the word “portion” because LearnSphere itself is a composite project consisting of a few different components that CMU collectively describes as “a community data infrastructure to support learning improvement online.” I might alternatively describe it as a cloud-based educational research collaboration platform. It is probably best known for its DataShop component, which is designed for sharing learning research data sets.

One of the more recent but extremely interesting additions to LearnSphere is called Tigris, which provides a separate research workflow layer. Suppose that you wanted to run a RISE analysis on your course data, in whatever platform it happens to be in. Lumen Learning is contributing the statistical programming package for RISE that will be imported into Tigris. If you happen to be statistically fluent, you can open up that package and inspect it. If you aren’t technical, don’t worry. You’ll be able to grab the workflow using drag-and-drop, import your data, and see the results.

Again, this kind of contribution was possible before CMU decided to make its open source contribution and before EEP existed. They have been cloud hosting LearnSphere for collaborative research use for some time now.

But now they also have an ecosystem.

By contributing so much under open license, along with the major accompanying effort to make that contribution ready for public consumption, CMU is making a massive declaration to the world about its seriousness regarding research collaboration. It is a magnet. Now Lumen Learning’s contribution isn’t simply an isolated event. It is an early example, with more to come. Expect more vendors to contribute algorithms and to announce data export compatibility. Expect universities to begin adopting LearnSphere, either via CMU’s hosted instance or their own instances, made possible by the full stack being released under an open source license. This will start with the group that will gather at the EEP summit at CMU on May 6th and 7th, because one has to start somewhere. That is the pilot group. But it will grow. (And LearnSphere is only part of CMU’s total contribution.)

With this kind of an ecosystem, we can create an environment in which practically useful innovations can spread much more quickly (and cheaply), and in which vendors, regardless of size or marketing budget, can be rewarded in the marketplace based on their willingness to make practical contributions of educational tools and methods that are useful to customers and non-customers alike. Lumen Learning has already made a contribution with the RISE research. Now they want to go further and make that research more practically useful. CMU’s contributed infrastructure and the EEP network will give us an opportunity to reward that kind of behavior with credit and attention.

That is the kind of world I want to live in.

  1. Outside of quantum mechanics, at least.
