April 19, 2024

Online Course Design Rubrics, Part 3: Now what?

Author: Kevin Kelly

Part 3 of 3: NOW WHAT?

In Part 1 of this series we looked at and compared seven online course design rubrics as they exist today. In Part 2 we looked at why these rubrics have become more important to individuals, programs, institutions, and higher education systems. In Part 3, I’ll review what’s missing from the rubrics, what’s next, and how various stakeholders are going (or should go) beyond what the current course design rubrics assess.

Ease of use (product)

Most of the rubrics have been shared with a Creative Commons license, making it easy to use or adapt them as part of your institution’s online course redesign or professional development efforts. However, many of the rubrics also pose some challenges, two of which I’ll cover here: barreling and organization.

Criterion Barreling: Possibly to reduce the number of criteria and/or to shorten their length, some of the rubrics use criteria that evaluate more than one aspect of a course. This issue, known as barreling, can make it difficult to review a course accurately with the rubric, since a course may meet one part of a criterion but not another. The Blackboard Exemplary Course Program Rubric addresses this challenge by assigning higher scores “for courses that are strong across all items in [a given] subcategory.” In their next round of revisions, rubric providers should investigate ways to decouple barreled criteria or move to holistic rubrics that ask reviewers to count the number of checkboxes checked in each category.
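
As a rough illustration of that holistic alternative, here is a minimal sketch in Python. The criterion statements and score bands are hypothetical placeholders, not drawn from any actual rubric:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    text: str          # a single, un-barreled statement to check
    met: bool = False  # checked off by the reviewer

# Hypothetical subcategory: one barreled criterion split into single-aspect checkboxes
navigation = [
    Criterion("Course menu links are labeled consistently", met=True),
    Criterion("Every module follows the same internal layout", met=True),
    Criterion("All external links work and open as described", met=False),
]

def holistic_score(criteria: list[Criterion]) -> str:
    """Score a subcategory by counting checked boxes, so courses that are
    strong across all items earn the highest rating."""
    met = sum(c.met for c in criteria)
    if met == len(criteria):
        return "Exemplary"
    if met > len(criteria) // 2:
        return "Accomplished"
    return "Needs improvement"

print(holistic_score(navigation))  # "Accomplished" (2 of 3 boxes checked)
```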

Categorical Organization: As discussed when comparing the rubrics in Part 1 (WHAT?), each rubric provider organizes its course design criteria differently and uses a different number of categories. This inconsistency makes it difficult to go back and find the criteria you need to address. For example, most of the rubrics put criteria about clear instructions and links to relevant campus policies at the beginning, but Blackboard puts them at the end in its Learner Support category. Ultimately, the categories should be seen as a messy Venn diagram rather than a linear list.

To increase meaning and motivation among online teachers and instructional designers, rubric providers should offer online versions of the rubrics that allow people to sort the criteria in different ways. For example, instructors and course designers could follow the backward design process as they build or redesign an online course: start with the objectives or outcomes, then confirm the course asks students to demonstrate achievement of those objectives (assessment), then make sure it provides opportunities to practice (activities, interactivity, and assignments), then create and find course materials to support reaching the outcomes (content).
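
As a sketch of what a sortable online rubric might look like under the hood, here is a minimal Python example. The criterion texts, category names, and stage tags are hypothetical illustrations, not drawn from any provider’s rubric:

```python
# Each criterion is tagged with its provider category and a backward-design stage,
# so the same list can be re-sorted to match different workflows.
criteria = [
    {"text": "Materials support reaching the outcomes", "category": "Content", "stage": 4},
    {"text": "Measurable module-level objectives are stated", "category": "Course Design", "stage": 1},
    {"text": "Activities provide practice toward the objectives", "category": "Interactivity", "stage": 3},
    {"text": "Assessments align with the stated objectives", "category": "Assessment", "stage": 2},
]

# Re-sort to follow backward design: objectives, then assessment,
# then practice, then content.
for c in sorted(criteria, key=lambda c: c["stage"]):
    print(f"{c['stage']}. [{c['category']}] {c['text']}")
```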

Ease of use (process)

Time for course review: Based on conversations with one of the CCC system’s trained faculty peer reviewers, the course review process involves a) going through an entire online course, b) scoring the rubric, and c) providing useful and actionable written feedback. This takes her an average of ten hours per course, which matches my own experience: if you are going through an entire course, it’s going to take time. Like the CCC, institutions should train and compensate peer reviewers for the considerable time it takes to support their colleagues.

Tools for course review: Earlier I mentioned that online course design rubrics a) take time to use, and b) are typically document-based, which makes it hard to link rubric scores and feedback to elements within an online course. LMS vendors should support online course review by repurposing existing LMS rubric tools and allowing reviewers to share feedback with online instructors. This would involve creating a rubric that sits above a course to review the course as a whole, as opposed to rubrics that sit within a course to review assignments. Reviewers should also be able to tag specific course materials, activities, and assessments in reference to the rubric scores and/or written feedback. If we can annotate PDF documents, images, and video clips (e.g., Classroom Salon, Timelinely), we should be able to do the same with an online course.
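
To make the idea concrete, here is a minimal sketch of the data model such a course-level review tool might use. The class and field names are hypothetical, not taken from any LMS vendor’s API:

```python
from dataclasses import dataclass, field

@dataclass
class CourseElementTag:
    """Points one piece of feedback at a specific item inside the course."""
    element_type: str  # e.g., "page", "discussion", "quiz"
    element_id: str    # the LMS's internal identifier for that item

@dataclass
class ReviewComment:
    """One criterion-level entry in a rubric that sits above the whole course."""
    criterion: str  # which rubric criterion this entry addresses
    score: int      # the reviewer's score for that criterion
    feedback: str   # actionable written feedback for the instructor
    tags: list[CourseElementTag] = field(default_factory=list)

# A reviewer flags a specific discussion while scoring an interaction criterion.
comment = ReviewComment(
    criterion="Expectations for participation are stated clearly",
    score=2,
    feedback="Add response-time and post-length expectations to the syllabus page.",
    tags=[CourseElementTag("discussion", "disc_314")],
)
print(comment.feedback)
```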

Time for course revision: The course revision process takes time. Several initiatives tied to the rubrics ask faculty to go through the review and redesign process over an entire academic term or summer break before implementing the changes. Unless they are given release time, faculty complete this work in addition to their typical full load; for itinerant lecturers, that workload is spread across multiple institutions. Stipends can motivate people to do the work, but release time may actually be more valuable for reaching the desired redesign goals quickly.

Tools for course revision: Most of the organizations that have created these rubrics also offer related professional development and/or course redesign support—Quality Matters, SUNY, the CCC system’s @ONE unit, the CSU system’s Quality Assurance project, and UW LaCrosse all offer some level of training and support for people redesigning online courses. Systems like the CCC have moved to local Peer Online Course Review processes to address the bottleneck effect of one central organization having to support everyone.

Exemplars

Only one of the rubrics (the Cal State system’s QLT rubric) has a criterion related to showcasing samples of exemplary student work so students know what their work should look like. However, all of the online course design rubric providers should showcase what meeting and exceeding each rubric criterion looks like. By providing exemplars of courses, modules, materials, assessments, and activities to novice and veteran online faculty alike, these initiatives would make it easier for those faculty to design or redesign their courses. For example, SUNY’s OSCQR initiative devotes a section of its website to Explanations, Evidence & Examples, which links to examples for some (but not all) of the 50 criteria and invites visitors to share their own examples through the OSCQR Examples Contribution Form. Other rubric providers may fold examples into their professional development workshops and/or resources, but public libraries of examples would allow a larger number of faculty to benefit.

Engagement

In the Limitations and Strengths section in Part 1, I mentioned that the majority of the online course design rubric criteria focus on reviewing a course before any student activity begins. Even criteria related to interaction and collaboration measure whether participation requirements are explained clearly or collaboration structures have been set up. However, when my program at SF State completed an accreditation application to create a fully online Master’s degree, one application reviewer made this comment and request: “Substantive faculty initiated interaction is required by the Federal government for all distance modalities. Please specifically describe how interaction is monitored and by whom.” Some institutions have created tools to estimate the time students will devote to engagement in a course.
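
As a rough illustration, here is a minimal sketch of what such an estimator might compute, assuming simple per-activity heuristics. The rates below are hypothetical placeholders, not any institution’s published formula:

```python
# Assumed rates (hypothetical placeholders):
READING_WPM = 130             # words-per-minute reading pace
MIN_PER_DISCUSSION_POST = 20  # minutes to draft one substantive post
MIN_PER_REPLY = 10            # minutes per peer reply

def weekly_engagement_minutes(reading_words: int, posts: int, replies: int,
                              video_minutes: int) -> float:
    """Estimate minutes per week a student spends engaging with the course."""
    return (reading_words / READING_WPM
            + posts * MIN_PER_DISCUSSION_POST
            + replies * MIN_PER_REPLY
            + video_minutes)

# Example week: 4,000 words of reading, 1 post, 2 replies, 30 minutes of video.
print(round(weekly_engagement_minutes(4000, 1, 2, 30)))  # ~101 minutes
```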

If both the research literature and the accreditation bodies state that interaction, community, and the like are critical to online student persistence and success, then the online course design rubric providers should offer more criteria and guidance for reviewing faculty-student and student-student interaction after the course has begun.

Further still, LMS providers need to make it possible for instructors to see their average feedback response time. The research shows that timely feedback is critical, yet instructors have no dashboard that shows how long they take to rate or reply to students’ discussion posts, or to post grades for assignments.
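
Such a dashboard metric is straightforward to compute once the timestamps are available. Here is a minimal sketch, assuming (student post, instructor reply) timestamp pairs can be pulled from an LMS data export; the data below is hypothetical:

```python
from datetime import datetime

# Hypothetical (student post, instructor reply) timestamp pairs from an LMS export.
pairs = [
    (datetime(2024, 4, 1, 9, 0),  datetime(2024, 4, 1, 15, 30)),
    (datetime(2024, 4, 2, 11, 0), datetime(2024, 4, 3, 10, 0)),
]

def avg_response_hours(pairs: list[tuple[datetime, datetime]]) -> float:
    """Average hours between a student's post and the instructor's reply."""
    deltas = [(reply - post).total_seconds() / 3600 for post, reply in pairs]
    return sum(deltas) / len(deltas)

print(f"Average feedback response time: {avg_response_hours(pairs):.1f} hours")
# Average feedback response time: 14.8 hours
```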

Empowerment

For the most part, the rubrics and related course redesign efforts focus on the instructor’s side of the equation. However, some research shows that online learners benefit from online readiness orientations (e.g., Cintrón & Lang, 2012; Lorenzi, MacKeogh & Fox, 2004; Lynch, 2001) and need higher levels of self-directed learning skills to succeed (e.g., Azevedo, Cromley, & Seibert, 2004). Therefore, rubric providers should add more criteria related to online learner preparation, scaffolding to increase self-direction, and online learner support.

Equity

While eliminating online achievement gaps is a goal for several state-wide initiatives, none of the rubrics compared in this series addresses equity specifically or comprehensively. In a future post I will outline how Peralta Community College District (Oakland, CA) developed the Peralta Equity Rubric in response to this void. As part of the CCC system, the Peralta team has already begun working with the CVC-OEI’s core team [1] and its newest cohort, which is focused on equity. Given the highly positive (and grateful) reactions the Peralta team has received as it shares the rubric at conference presentations and virtual events, expect to see more institutions add equity to their rubrics in the near future.

Efficacy

As stated in the Evidence of impact section in Part 2, more rubric providers need to go beyond what got us to this point (i.e., “research supports these rubric criteria”) and validate these instruments further. It would also help the entire field if existing research were made more visible. Reports like the Quality Matters updates to “What We’re Learning” (Shattuck, 2015) are a start. Now we need to see more research at higher levels of the Kirkpatrick scale, as well as more granular studies of how course improvements affect different sets of students (e.g., first-generation, Latinx, African-American, academically underprepared). Here is a list of impact research efforts to watch in the near future:

  • The CCC’s newly branded California Virtual Campus-Online Education Initiative plans to conduct more research in its second funding period (2018-2023), which just began last fall.
  • Fourteen CSU campuses are participating in the SQuAIR project—Student Quality Assurance Impact Research—to determine “the impact of QA professional development and course certification on teaching performance and student success in 2018-19 courses.” The SQuAIR project will analyze course completion, pass rates, and grade distribution data, along with student and faculty survey results.

Enforcement

Colleges and universities, community college districts, and state-wide higher education systems should review these rubrics (if they do not already use one) and kick off adoption initiatives that include training for faculty and staff alike. (Kudos if you are already doing this!) These efforts take time, money, and institution-level buy-in, but if online course enrollments continue to increase at the current rate, then institutions must invest in increasing student success across the board, not just with early adopters and interested online teachers.

I’m not sure why all of the NOW WHAT? elements above begin with E, but I am sure that this list is not exhaustive. Further, while I have focused primarily on the rubrics themselves to maintain a reasonable scope for a blog post series, these rubrics are rarely used in a vacuum: the rubric providers and the institutions that adopt them have built robust professional development efforts that use the rubrics in different ways to increase student success. Keep an eye on the MindWires blog for more installments related to online course quality and supporting online student success.


References for citations in this three-part series

  [1] Disclosure: OEI is a client of MindWires, and I have been working directly with Peralta CCD.

