In this week’s BlendKit 2017 Reader, the topic was “Quality Assurance in Blended Learning.” I’d like to discuss my thoughts on the broader subject of quality assurance as it pertains to courses in general, not only blended ones.
Since starting at Rutgers University in January 2015 (about 2.5 years ago now) as an Instructional Designer, I have fully embraced the rubric and course review process developed by Quality Matters (QM). I have completed three of their professional development courses: Applying the Quality Matters Rubric (APPQMR), the Peer Reviewer Course (PRC), and the Master Reviewer Certification (MRC). I have also developed an internal course review process (one not leading to official QM recognition) that I have used to review online courses in subjects ranging from accounting to political science to linguistics. And to show that I practice what I preach, I have taught both face-to-face and online courses in information technology and computer programming. In sum, I am heavily invested in Quality Matters and quality teaching, and I fully believe that course reviews are worthwhile endeavors.
That’s not to say the course review process is without its flaws. The standard process works like this:
- First, each member of a 3-person review team (Review Chair, Peer Reviewer, and Subject Matter Expert) reviews the course independently within a defined review window, typically 3-4 weeks.
- The 3 reviewers then meet to discuss their findings and finalize a report.
- Once the report is finalized, the Review Chair informs the instructor of the outcome, and works with them to implement the recommendations from the review.
The estimated time for each reviewer to complete the first two steps is about 20 hours; multiply that by 3 reviewers and you get 60 hours. Add the time it takes to implement the recommendations, which varies from course to course, and you have a significant investment of time and effort. Unless there are incentives (financial or otherwise) for faculty to complete the amendments, it can be difficult to motivate them to do so, which defeats the entire purpose of the review.
And even with this significant investment of time and resources, there’s still a big piece missing. As the BlendKit Reader puts it:

> Limiting the scope of blended or online course quality to considerations of the designed environment results in a significant blind spot.
What’s that blind spot? The effectiveness of the teaching itself. Quality Matters looks only at the design of the course, not at how effective the instructor is at imparting knowledge to students. How detailed is their feedback to students? How responsive are they to student questions? Do they seek out alternative resources when students are stuck?
This topic has been on my mind lately as I work to improve our course review process, with the aim of ensuring faculty implement as many of the review recommendations as possible while minimizing the work they need to do.
For instance, one model I’m considering would fit the reviewed course’s contents into a pre-defined template that already fulfills several QM standards (those related to technical support, academic support resources, etc.), so the instructor can focus on their content rather than on boilerplate language, course layout, or how the course is organized within the learning management system (LMS).
Regardless of what happens, this week’s BlendKit Reader has given me a lot to think about. Quality assurance is something we desperately need in higher education, but it can be tough to strike the right balance between resources, quality, and time investment.