April 25, 2024

Scaling the Seminar

Author: Michael Feldstein

In my recent first look at Engageli, I wrote about the importance of scaling the humanities seminar. The short version is that budget pressures will force universities to make cuts to programs that are the most costly to run. Since STEM programs tend to generate more grant money and social sciences programs can often teach at least the lower-level courses in large numbers, the humanities are vulnerable. The pedagogy of the seminar constrains the size of even the 100-level composition classes that all students take (in contrast to the vast majority of other 100-level courses). We are not good at scaling seminar-style courses with quality yet. Consequently, programs that rely on seminar-style pedagogy are vulnerable during hard economic times. Faculty are more likely to be let go, and programs are more likely to be cut.

These cuts can create a vicious cycle because teaching these disciplines with quality currently depends on having sufficient faculty. The more faculty are cut, the weaker the program becomes. The weaker the program is, the fewer students it attracts. The fewer students there are in the program, the less financially healthy it becomes. As a result, cutting into a program can easily lead to its eventual elimination. While specious rhetoric about humanities degrees not being career-relevant doesn’t help, the real pressure on humanities departments will come from the institution’s per-student cost.

These are good reasons to find ways to scale the seminar, but there are others. In fact, I’m going to argue that scaling the seminar is essential for equity if we want to avoid a two-track system of education—one at expensive universities and another for the rest of us. This is already a serious problem that the current situation is likely to make worse. I’m also going to argue that it is not an oxymoron to talk about large seminars. I think it’s possible to approximate—and even innovate on—the pedagogical affordances of a seminar with significantly more students in one class section.

[Image caption: “Seminars should only be for rich white dudes.”]

Finally, I will argue that one major reason we haven’t arrived at this solution before now is that EdTech is stuck in something of a cul-de-sac. We’ve been pursuing the solutions that have helped improve student self-study by providing machine-generated feedback on formative assessments. While these solutions have proven beneficial in some disciplines and have been especially important in helping students through developmental math, they have hit a wall. Even in the subjects where they work well, these solutions usually have sharply limited value by themselves. They work best when the educators assigning these products use their feedback to reduce (and customize) their lectures while increasing class discussion and project work.

Despite the proliferation of these products, flipped classrooms and other active learning techniques are spreading slowly. And meanwhile, there is a whole swath of disciplines for which machine grading simply doesn’t work. This applies most broadly to any class in which evaluating what students say and write is a major part of the teaching process. EdTech has been trying for too long to slam its square peg into a round hole.

I am calling for a new research agenda. One in which we focus on scaling educational conversation and human-to-human engagement with as much energy as we have been investing in machine assessment. I am far from the first person to call for a focus on this problem. For a variety of structural and historical reasons, it hasn’t gained traction. But the situation has changed. Now is the time for such a research agenda. Now is the time for us to figure out how to augment and scale educator-facilitated pedagogy. We can’t just keep trying to automate assessment and hoping the rest will work out. Scaling the seminar will not be easy. But I will argue that the challenges are less daunting than those we face by continuing down the current path, and that the results could be higher-quality, more equitable education.

This post is the first in a multi-part series.

The gold standard

If we’re being honest, the seminar is still considered the de facto gold standard for all disciplines. However much we may pretend that large lectures are acceptable in teaching some disciplines, the truth is that rich colleges and universities minimize them and market that lack of scale as one of the hallmarks of an elite institution.

On the undergraduate level, one way—perhaps the dominant way—in which top-tier colleges and universities measure and express the quality of the education they offer is through their student/faculty ratio. This is a proxy for the seminar/large-lecture ratio.

This pattern holds online as well. Take a look at the differences in the design of online graduate programs from top-branded universities versus those from older, more access-oriented programs. We can use 2U as a proxy for the former, since the company essentially built its business by persuading top-tier universities that it could build top-tier online graduate programs for them. 2U’s motto is “No back row,” and its course designs are heavily weighted toward synchronous classes of constrained size. 2U scales graduate seminars by offering many sections of an identically designed course, taught by more junior (i.e., less expensive) instructors trained to deliver the same design. One way to express this strategy is that 2U is scaling the seminar “horizontally.” It still offers normal-sized seminars but pushes down the cost of incrementally adding more sections.

One of the reasons they have been able to sell this approach to faculty senates that would have rejected a larger and more asynchronous design is that 2U has preserved the seminar’s basic structure. Keep in mind, though, that this model tends to work best with one-year degree prices in the $40,000 or $50,000 range. It incrementally improves the scalability of seminars. That’s not a criticism of 2U. It’s simply an acknowledgment that this approach to scaling does not bend the cost curve radically enough to solve the problems that I am calling out.
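
To make that concrete, here is a back-of-the-envelope model of horizontal scaling. The numbers are invented for illustration and are not 2U’s actual figures: a one-time course design cost amortizes across sections, but each new section carries its own instructor cost, so the cost per student flattens rather than falling toward zero.

```python
# A back-of-the-envelope model of "horizontal" scaling. All numbers are
# invented for illustration; they are not 2U's actual figures.

DESIGN_COST = 500_000     # assumed one-time cost to build the shared course design
INSTRUCTOR_COST = 15_000  # assumed per-section cost of a junior instructor
SECTION_SIZE = 15         # assumed seminar-sized section

def cost_per_student(num_sections: int) -> float:
    """Total cost divided by total enrollment across all sections."""
    students = num_sections * SECTION_SIZE
    total_cost = DESIGN_COST + num_sections * INSTRUCTOR_COST
    return total_cost / students

for sections in (10, 100, 1000):
    print(f"{sections:>4} sections: ${cost_per_student(sections):,.0f} per student")

# Output:
#   10 sections: $4,333 per student
#  100 sections: $1,333 per student
# 1000 sections: $1,033 per student
# The one-time design cost amortizes away, but the per-student cost flattens
# at INSTRUCTOR_COST / SECTION_SIZE = $1,000 and never approaches zero.
```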

The reality is that we have to substantially lower the cost of higher education in the United States while increasing the number of available seats in degree programs if we are going to meet equity goals. And we also have reasons to do so for institutional sustainability. If colleges can serve more students—with quality—at a lower cost per student, then we can achieve both equity and sustainability goals.

This shift cannot work if it begins and ends with a mantra to “do more with less.” Access-oriented institutions are already doing more than they can sustain with less than they need to be sustainable. Rather, we need to create tools that are “force multipliers.” We can think of that metaphor in the military sense, i.e., increasing the effectiveness of the human forces we have on the ground, or in the physics sense, i.e., increasing the force that one human can apply by using a pulley or an inclined plane or an engine. Technology externalizes techniques into tools that enable us to apply those techniques more effectively with less effort. EdTech should externalize pedagogical techniques in ways that enable us to apply them to help students learn more effectively with less effort per student on the part of the educator.

I believe in the transformative potential of EdTech. I also believe that we have been thinking too narrowly about the kinds of tools we can create and how we can apply them.

The cul-de-sac

EdTech has generally approached scaling by trying to replicate the growth of large lecture classes, i.e., by scaling the machine scoring of assessments. On one level, this makes sense. We know that students do better when they get frequent and timely feedback, and we also know that providing that feedback is incredibly time-consuming. The feedback loop is the fundamental unit of teaching and learning. Students try something, see what happens, and adjust accordingly. Educators give students something to try, watch what happens, and adjust accordingly. The more rapidly and frequently students can get feedback, the more likely students are to learn, and the more rapidly they tend to progress. So scaling feedback loops—both the number that can be provided to each student and the number that can be supported for an entire class at one time—is an important problem to solve for both quality and cost reasons. Automating assessment is a straightforward way to scale feedback loops without burdening the instructor.
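
To make the feedback-loop idea concrete, here is a deliberately minimal sketch of machine-graded formative assessment. It is a toy illustration of the pattern rather than any particular product’s implementation: the student submits an answer, the system responds immediately, and each additional feedback cycle costs the instructor nothing.

```python
# A toy sketch of an automated formative feedback loop: instant, repeatable
# feedback with no marginal effort from the instructor. Illustrative only.

QUESTION_BANK = {
    "q1": {
        "prompt": "What is the derivative of x^2?",
        "answer": "2x",
        "hint": "Apply the power rule: d/dx of x^n is n*x^(n-1).",
    },
}

def give_feedback(question_id: str, student_answer: str) -> str:
    """Return immediate feedback on a single attempt, with a hint on a miss."""
    item = QUESTION_BANK[question_id]
    if student_answer.strip().lower() == item["answer"]:
        return "Correct!"
    return f"Not quite. Hint: {item['hint']} Try again."

print(give_feedback("q1", "2x"))  # Correct!
print(give_feedback("q1", "x"))   # Not quite. Hint: Apply the power rule...
```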

Keep in mind that this trend started long before “EdTech” was a word. Universities figured out that TAs are less expensive to use as graders than professors and that Scantrons are less expensive than TAs. Universities were using Scantrons and similar machine grading technologies to scale assessment when I was a student back in the 1980s, at a time when personal email addresses were still relatively uncommon.

Unfortunately, scaling via machine assessment is problematic. Some types of student output—like writing, discussion comments, or projects—are difficult for machines to evaluate. To do even a passably reliable job, machine assessment inevitably flattens the very notion of feedback. This approach can work OK for helping students master foundational knowledge and skills that are low on Bloom’s Taxonomy, but it falls apart quickly when trying to provide feedback on the kinds of complex analysis and problem-solving skills that we associate with a college education. There are underutilized methods for pushing the boundaries of machine assessment, such as educational games and inquiry-based courseware designs, but these approaches have their limits too.

Think about the feedback that students get in a seminar. Classroom conversation is rich with feedback. In addition to direct feedback from both the instructor and fellow students, there’s indirect feedback that results from multiple people engaged in purposeful conversation and collaborative problem-solving. This high-bandwidth environment for learning feedback doesn’t easily lend itself to simulation or replacement by software-assisted self-study. Likewise, when evaluating student expressions of sophisticated ideas, even the most enthusiastic proponents of writing assessment software have to grant that today’s algorithms are left in the dust on the subtleties compared to even a mediocre human reviewer. And again, while writing and humanities are the most obvious cases, I’ve had math and physics professors tell me that they can facilitate a deeper understanding of even foundational concepts in a small, discussion-based seminar than they can in any other way.

So while conventional machine assessment enables us to help more students achieve passable literacy levels in reasonably well-structured knowledge domains, it’s not consistently good at teaching critical thinking in these domains. And it’s virtually useless at teaching fluent expression or collaborative problem solving—two key skills for the modern workplace (not to mention the modern democracy, such as it is).

The fundamental limiter to scaling a class is the human instructor, who has only so many hours in the day. Augmenting instructor feedback with machines is the approach that EdTech has emphasized so far. But what if we could scale student peer-to-peer feedback with quality? While instructors do not scale as class sizes grow, peers do, by definition. Instead of putting all our eggs in the one basket of trying to make computers provide feedback that is as good as a human’s, what if we focused more energy on getting students to provide feedback that is as good as the instructor’s?
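
As a minimal sketch of that claim, consider a simple peer-review rotation: every student added to a class adds both demand for feedback and supply of reviewers, so the ratio of reviewers to authors never degrades. The scheme below is my own illustration, not a description of any existing product.

```python
import random

# A minimal peer-review rotation: shuffle students into a circle and have
# each one review the next k peers. Everyone gives and receives exactly k
# reviews, and no one reviews themselves (as long as k < class size).
# Illustrative only; not any particular product's assignment algorithm.

def assign_peer_reviews(students: list[str], k: int = 2) -> dict[str, list[str]]:
    order = students[:]
    random.shuffle(order)
    n = len(order)
    return {
        order[i]: [order[(i + j) % n] for j in range(1, k + 1)]
        for i in range(n)
    }

print(assign_peer_reviews(["Ana", "Ben", "Chen", "Dee", "Eli"]))
# e.g. {'Dee': ['Ana', 'Eli'], 'Ana': ['Eli', 'Ben'], ...}
```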

The new “blended” and the new “flipped”

COVID-19 is teaching us a lot in a hurry about the quality of that high-bandwidth educational feedback machine we call the classroom. And we are already seeing products like Engageli respond to those lessons. We can begin to see a world in which rich synchronous educational conversations take place fully online, or even in an unpredictable mix of some participants being online while others share a physical room. We can aspire to a new type of “blended” class that mixes synchronous and asynchronous experiences rather than online and face-to-face ones.

As I wrote in my Engageli first look post, that would be great but not enough. In an environment where software mediates the face-to-face as well as the online synchronous class experience, the software could scaffold pedagogy in ways that a physical classroom cannot—for both the educator and the students. At the tail end of my Engageli review, I started to explore ways in which software can help improve the quality of human-to-human feedback. I used the example of Riff Analytics, which helps students to recognize whether they are speaking out, taking turns, and affirming statements made by their peers in group conversations. In other words, Riff helps students to become more effective at collaborative, purposeful, and equitable peer conversations.
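
As a rough sketch of the kind of signals such a tool might surface, consider speaking-time share and turn counts computed from a conversation log. This is my own hypothetical illustration of the general idea, not Riff Analytics’ actual algorithm.

```python
from collections import defaultdict

# Hypothetical conversation metrics of the sort a Riff-like tool might show:
# share of talk time and number of turns per speaker. Illustrative only.

# Each utterance: (speaker, duration in seconds)
utterances = [
    ("Ana", 40), ("Ben", 5), ("Ana", 55), ("Chen", 10), ("Ana", 60),
]

talk_time = defaultdict(float)
turns = defaultdict(int)
for speaker, duration in utterances:
    talk_time[speaker] += duration
    turns[speaker] += 1

total = sum(talk_time.values())
for speaker, seconds in talk_time.items():
    print(f"{speaker}: {seconds / total:.0%} of talk time over {turns[speaker]} turn(s)")

# Ana: 91% of talk time over 3 turn(s)
# Ben: 3% of talk time over 1 turn(s)
# Chen: 6% of talk time over 1 turn(s)
# Seeing that one person holds ~90% of the talk time is exactly the kind of
# feedback that nudges a group toward more balanced turn-taking.
```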

We can take this idea much further, particularly if we blend synchronous and asynchronous tools. The ideal we should be trying to replicate and improve on is not just the blended classroom but the flipped classroom. One of the perennial challenges with flipping the classroom is that facilitating consistently productive group work among students is incredibly hard. It’s a lot of work that sometimes fails regardless of the instructor’s skill and best efforts. It’s hard even in a small class when conducted by an instructor who practices it regularly. It’s really hard in a large class when conducted by an instructor who is mostly trained and experienced in a conventional lecture model.

We shouldn’t be surprised at the stories of instructors who tried to flip their classes and had horrible experiences. We haven’t provided them with the tools that they need. With its competency-based analytics, courseware can help faculty prepare by giving them a sense of the foundational knowledge that students may be either mastering or struggling with. But by itself, that information is not sufficient for a successful flipped classroom. The hardest part about flipping a class is getting the groups to collaborate effectively and consistently. I can’t think of any widely adopted EdTech tools that are specifically designed to help with this part of the challenge.

I think it’s possible to scale the seminar and active learning across most disciplines. I think we should put more energy into improving the quality of peer feedback rather than single-mindedly focusing on improving machine feedback. There is research, and there are products that can help to achieve this goal. Both have been chronically undernourished because all the glory (and money) has gone to machine assessment. But this historic moment we are all living through is creating an opportunity to rebalance our efforts. COVID-19 has forced EdTech—and many educational institutions—to think about the richness of human-to-human conversational experiences with more clarity and specificity.

In my next post in this series, I’ll describe how we might tackle this challenge using the hardest course to scale that I can think of—English Composition—as my example.
