April 16, 2024

Deeply Flawed GMU Report on Online Education Asks Good Questions But Provides Misguided Analysis

Author: Phil Hill

Another year and another deeply flawed report about online education in US higher education, this time by Spiros Protopsaltis (associate professor and director of the Center for Education Policy and Evaluation at George Mason University, as well as former aide to Senate Democrats) and Sandy Baum (a fellow at the Urban Institute and professor emerita of economics at Skidmore College, as well as former advisor to Hillary Clinton’s presidential campaign). As Inside Higher Ed described the report, titled “Does Online Education Live Up to Its Promise? A Look at the Evidence and Implications for Federal Policy”:

Online education has not lived up to its potential, according to a new report, which said fully online course work contributes to socioeconomic and racial achievement gaps while failing to be more affordable than traditional courses.

The report aims to make a research-driven case discouraging federal policy makers from pulling back on consumer protections in the name of educational innovation.

In many ways this report takes a similar approach to the subsequently withdrawn report by Caroline Hoxby of Stanford University: it asks important questions but provides flawed analysis to support its conclusions. Unlike the previous report, however, the GMU one documents its sources well, with 165 endnotes, and for the most part it describes the underlying analysis accurately. The major problems arise in conflating online education in general with the for-profit sector and in drawing conclusions that are not supported by the evidence.

The report is not easy to wade through, largely because of its wide-ranging discussion of for-profits, the history of online education, past federal policy, a snapshot of research on learning outcomes, and current policy debates. Let's take the primary conclusions and discuss the analysis provided.

“Online education is the fastest-growing segment of higher education and its growth is overrepresented in the for-profit sector.”

The report accurately describes the growth of online education, which has risen to the point where one in three postsecondary students takes at least one online course.

[Figure 1: Growth of online education enrollment]

There is a disturbing tendency to describe this growth as "explosive" (mentioned five times in the report) and an unexplained reliance, in many cases, on six-year-old data when newer data exist. But the conclusion about growth is accurate.

The phrasing "overrepresented in the for-profit sector" and "concentration in the for-profit sector" in describing online education is very misleading, however. It is true that for-profit schools have a larger percentage of their students studying fully online, but the topic of the report is online education in general, and for-profits represent a rapidly shrinking minority of that population. Never mentioned in the report is the most salient point about for-profits – the sector is in major decline. As documented by IPEDS:

[Figure: For-profit enrollment trends, 2002–2016 (IPEDS)]

This decline seems relevant, and it holds even when you look specifically at fully online programs (e-Literate analysis of IPEDS data).

[Figure: Trends in fully online enrollment by sector (e-Literate analysis of IPEDS data)]

Even in 2012, just two years after the for-profit peak, the for-profit sector accounted for less than 35% of fully online student enrollment, and as of Fall 2017 it was down to 21%, with a clear downward trend. For-profits are rapidly becoming less and less relevant to the topic of online education, and there is no evidence to back up Protopsaltis's claim that the for-profit sector is about to make a big comeback. It is high time that responsible analysts and scholars stop conflating online education with for-profit schools, and the authors of this report should know better. If you want to study the for-profit sector, then describe it accurately and don't extrapolate beyond what the data supports.
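To make the share calculation concrete, here is a minimal Python sketch of the kind of IPEDS sector-share analysis referenced above. The enrollment counts are hypothetical placeholders chosen only to illustrate the computation, not actual IPEDS figures.

```python
# Minimal sketch: each sector's share of fully online (exclusively
# distance education) enrollment, from IPEDS-style headcounts.
# The counts below are hypothetical placeholders, not real IPEDS data.

enrollment = {
    # year: {sector: exclusively-online headcount}
    2012: {"public": 1_100_000, "private_nonprofit": 400_000, "for_profit": 800_000},
    2017: {"public": 1_600_000, "private_nonprofit": 800_000, "for_profit": 650_000},
}

for year, by_sector in sorted(enrollment.items()):
    total = sum(by_sector.values())
    for sector, count in by_sector.items():
        print(f"{year} {sector}: {100 * count / total:.1f}% of fully online enrollment")
```

The point of the exercise is that the denominator (all fully online students) keeps growing while the for-profit numerator shrinks, so the sector's share falls even faster than its headcount.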

“A wide range of audiences and stakeholders—including faculty and academic leaders, employers and the general public—are skeptical about the quality and value of online education, which they view as inferior to face-to-face education.”

I find it strange to see this much emphasis on perceptions from an organization that purports to provide "timely, sound, evidence-based analysis," but perceptions are somewhat important to understand. The body of the report describes a variety of research sources, but it is inaccurate to summarize that this wide range of stakeholders "view [online education] as inferior to face-to-face education," especially if you look at more recent data sources.

Consider the 2018 Inside Higher Ed / Gallup survey of faculty (starting on page 32), which found that faculty with actual experience teaching online have surprisingly high confidence in the quality potential of online education. Among those who have taught online, the percentage who agree or strongly agree that "for-credit online courses can achieve student learning outcomes that are at least equivalent to those of in-person courses" is 39% for any institution, 52% at their own institution, 54% in their own department or discipline, and 58% in the courses they teach. Put simply, a majority of faculty who have experience teaching online think results can be at least equivalent to in-person.

Consider the 2019 Inside Higher Ed / Gallup survey of Chief Academic Officers, where fully 83% of them report plans to increase investment in online programs at their institution.

Consider the 2018 Northeastern University Survey on the Use and Value of Educational Credentials in Hiring, where they found that “Online credentials are now mainstream, with a solid majority (61%) of HR leaders believing that credentials earned online are of generally equal quality to those completed in-person, up from lower percentages in years past.”

Yes, perception issues are important. But the report’s conclusions are misleading and out of date.

“Students in online education, and in particular underprepared and disadvantaged students, underperform and on average, experience poor outcomes. Gaps in educational attainment across socioeconomic groups are even larger in online than in traditional coursework.”

This topic deserves its own report, and the GMU authors are right to point out that simply comparing online to face-to-face outcomes can obscure the important issue of underprepared and disadvantaged student experiences. On the surface, the conclusion about achievement gaps being “larger in online than in traditional coursework” is also accurate. But the more important question is not whether there is a problem, but rather how to minimize or reverse the achievement gap.

The report references several studies of the California Community College system, most of them several years old, describing how students "were less likely to complete online courses and when they completed them, less likely to pass them." Yet the authors did not look at the trends within this system, easily found in the system's most recent Distance Education report, where the overall performance gap between online and traditional courses is closing rapidly.

[Figure: California Community Colleges improvement in the online vs. traditional performance gap]

More importantly, the achievement gains applied to all ethnic groups.

[Figure: California Community Colleges online performance by ethnicity]

It does appear that the performance gaps within online education are not closing by ethnicity, despite the broad improvements. That is a real question to consider. Rather than taking the simplistic view that online equals bad results, we should focus on how to maintain the current improvements while figuring out how to do even better in providing equal opportunity.
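To illustrate the kind of trend analysis that matters here, below is a minimal Python sketch that computes the online-versus-traditional success-rate gap by year and group. The rates are hypothetical placeholders for illustration, not figures from the CCC Distance Education report.

```python
# Minimal sketch: tracking the success-rate gap between traditional and
# online courses over time, overall and by group. All rates are
# hypothetical placeholders, not CCC Distance Education report figures.

success_rates = {
    # (year, group): (traditional_pct, online_pct)
    (2012, "overall"): (68.0, 57.0),
    (2016, "overall"): (69.0, 64.0),
    (2012, "group_a"): (72.0, 63.0),
    (2016, "group_a"): (73.0, 69.0),
}

for (year, group), (trad, online) in sorted(success_rates.items()):
    gap = trad - online
    print(f"{year} {group}: traditional {trad:.0f}%, online {online:.0f}%, "
          f"gap {gap:.1f} points")
```

Tracking the gap itself, rather than the raw pass rates, is what reveals whether online outcomes are converging with traditional ones for each group.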

“Online education has failed to improve affordability, frequently costs more, and does not produce a positive return on investment.”

This conclusion is largely based on the NBER report from Hoxby that was subsequently withdrawn, a report for which I provided a detailed critique (I was never able to get a response from its author). Beyond a gross mischaracterization of the source data, the Hoxby report made a fundamental error in its ROI analysis.

This view of online education – students choosing between non-selective face-to-face institutions or online institutions – takes a zero-sum approach, as if you have the same student population just choosing between institution types. This view ignores the large and growing number of working adults who can only attend college – often in degree-completion programs or masters-level programs – because of an online option. Their real choice is between an online institution and not attending at all.
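To make the counterfactual point concrete, here is a toy return-on-investment comparison in Python. The dollar figures are hypothetical placeholders, a minimal sketch of the argument rather than anything drawn from the Hoxby data.

```python
# Toy ROI sketch: the sign of the result depends on the counterfactual.
# All dollar figures are hypothetical placeholders for illustration.

online_earnings_gain = 8_000   # assumed annual earnings lift from an online degree
online_annual_cost = 3_000     # assumed annualized cost of the online program

f2f_earnings_gain = 10_000     # assumed annual lift from a face-to-face degree
f2f_annual_cost = 3_500        # assumed annualized cost of the face-to-face program

# Zero-sum framing: the student is assumed to choose online OR face-to-face.
roi_vs_f2f = (online_earnings_gain - f2f_earnings_gain) - (online_annual_cost - f2f_annual_cost)

# Working-adult framing: the realistic alternative is no degree at all.
roi_vs_no_degree = online_earnings_gain - online_annual_cost

print(f"Online vs. face-to-face: {roi_vs_f2f:+,} per year")      # negative under these numbers
print(f"Online vs. no degree:    {roi_vs_no_degree:+,} per year")  # clearly positive
```

Under the zero-sum framing the online option looks like a loss; against the realistic counterfactual of not attending at all, the same program shows a positive return.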

The GMU report relies on the withdrawn Hoxby report without even mentioning that it was withdrawn.

The report makes an excellent point that pricing for students has largely not been lower in online education, but there are specific examples (UF Online, SNHU, and WGU, to name a few) of institutions providing much lower-priced online offerings than comparable face-to-face programs. It would be interesting to study enrollment trends and student outcomes for these lower-priced online programs compared to comparably-priced programs.

“Regular and substantive student-instructor interactivity is a key determinant of quality in online education; it leads to improved student satisfaction, learning, and outcomes.”

“Online students desire greater student-instructor interaction and the online education community is also calling for a stronger focus on such interactivity to address a widely recognized shortcoming of current online offerings.”

These last two points get to the primary purpose of the GMU report – current federal policymaking efforts that include a re-evaluation of the Regular and Substantive Interaction (RSI) requirement for programs to be classified as online education and not correspondence courses.

The GMU report describes a large body of work documenting the importance of interaction to online student success, and the report accurately describes how “the online education community has also emphasized recently the importance of student-instructor interaction for ensuring quality.” This point is crucial – the vast majority of educators working in online education understand and accept the importance of interaction; there is not significant disagreement on the subject.

What the GMU report gets wrong is conflating actual quality interaction within courses with federal regulations. Much of the basis of the GMU analysis is a series of Office of Inspector General (OIG) reports calling out weak implementation of the RSI regulations. In the biggest case – a report on Western Governors University (WGU) and its competency-based model – this conflation is unwarranted, as I described in a detailed analysis of that action. There were two particular problems with the OIG findings in my view – the first is that the OIG defined their own terms due to the ambiguous nature of the RSI regulation.

The OIG used a binary, role-based approach (you are an instructor or you are not), leading to the conclusion that only course mentors and evaluators could be considered instructors. The basis of this determination was that an instructor must "provide instruction on course content" – clearly a content-dissemination view that rejects alternative pedagogies. And this interpretation, which the OIG treats as unambiguous, is not based on law, regulations, or commonly-accepted educational terminology. [snip]

This is why I call the audit methodology hyper-literal. Somehow the OIG thinks it can determine – without any disagreement or ambiguity – the "ordinary meaning of those terms" based on its own interpretations.

The second problem was that the OIG did not evaluate the actual courses or even address the issue of course quality.

Also note that the determination was based entirely on course design materials – think syllabi and course outlines. The OIG did not look at interactions arising during actual course work, just whether there were pre-defined webinars, meetings, and student-instructor interactions. [snip]

These views essentially reject not just WGU's approach to competency-based education but also the broader movement of faculty from "sage on the stage" to "guide on the side." Instructors, in the OIG's view, must provide instruction on course content, and interactions must be pre-planned in the course design materials, at least for online courses.

The OIG did not look at student outcomes, applied its own hyper-literal interpretation of an ambiguous regulation, and did not look at the course interactions themselves – just whether pre-planned course materials described future interactions. Note, however, that despite the weakness of the OIG report, this does not mean that WGU is off the hook. Likewise, the GMU report's over-reliance on the OIG reports mistakes regulatory compliance for actual interaction quality, but that does not mean there is no issue: many or most online courses could likely improve faculty-student interaction.

It is broadly understood that the RSI regulation is important but flawed. I agree with the GMU report that simply eliminating the regulation would be a mistake. But it is overly simplistic and completely subjective for the GMU report to conclude that "unbundled faculty models that have difficulty complying should make changes to match the law instead of changing the law to match the needs of such models." That is a policy position, not one based on "timely, sound, evidence-based analysis."

In Conclusion

This last point gets to the danger of this GMU report. It is a subjective set of policy recommendations disguised as extensively-documented, evidence-based research. There is value in the questions asked and in much of the research documented in the endnotes and body of the report, and there is a clear policy position presented on regular and substantive interaction. But the report does more harm than good due to its mischaracterizations, selective data usage, and flawed analysis. Read it as a policy paper and not a research report.

Paul Fain from Inside Higher Ed provided a valuable, pithy summary at the end of his article on the report.

The report’s co-authors and its critics agreed that further research is needed on the rapidly evolving field of online education, particularly as more high-quality colleges and universities ramp up their online offerings.

