Here’s my reaction to the BlendKit Week 3 Reading, which focused on assessing students’ learning.

As an instructional designer, I know that the best assessment methods are not always the easiest to grade. Take research papers: broken up into an outline, a rough draft, and a final version, they provide two distinct opportunities for instructor feedback to correct issues before students submit the final, high-stakes version. But that also triples the amount of grading on the instructor's part (I have been there myself). With assessment methods like these, there seems to be an inverse relationship between the quality of the student learning experience and the time it takes to grade.

While there are mitigation strategies such as holistic and single-point rubrics to assist with grading formative assessments, there's nothing quite so attractive as the ubiquitous multiple-choice exam. Many textbook publishers already include ready-made question banks when you adopt their textbooks, and other than the initial setup of dates, durations, and other settings, these exams require zero work from the instructor (including grading). There's a reason multiple-choice exams are so popular.

But when most people think of multiple-choice exams, they think of simple questions with one-word answer choices, barely more than fill-in-the-blank items. Multiple-choice tests don't have to be that way, though. Take the GRE, for instance, which I took recently. Most of its questions are multiple-choice, but you would be hard-pressed to find many people who think those questions are easy! As the BlendKit Reader discusses:

While many online quizzes (especially many of those available as supplemental instructor resources) focus on low-level factual recall, multiple-choice items may be written at the higher application, analysis, synthesis, or evaluation levels. Such items often involve some sort of scenario aimed at promoting learning transfer from one context to another. Additional strategies might require students to view a chart/graph and select the most accurate interpretation from among several alternatives or even to collaborate with classmates in selecting the best justification statement for why a given answer is correct prior to individually submitting their quizzes.

Using Bloom's Taxonomy as a guide, most multiple-choice questions are written at the lowest, knowledge-recall ("identify") level, such as this question from one of the resources linked in the BlendKit Reader:

A three-year-old child can usually be expected to:

a. Cry when separated from his or her mother

b. Have imaginary friends

c. Play with other children of the same age

d. Constantly argue with older siblings

But according to the quote above, multiple-choice questions can also be written at every level of Bloom's Taxonomy, including Analysis and Synthesis. These questions usually describe a scenario, or provide a graph or other data, and ask students to judge the "best" course of action (i.e., several answers may be defensible, but only one should be the single "best" answer), as in the following question taken from the same resource:

A fourteen-year-old girl refuses to attend school despite pleading by both of her parents. A physical examination reveals no medical problem, and a joint assessment by the social worker and psychiatrist indicates no apparent reactive element instigating the sudden school avoidance. The girl appears depressed; she herself is unsure of why she is not attending school. The BEST intervention by the social worker is to:

a. Recommend that the girl remain home for at least another week without pressure

b. Intervene with school authorities to provide her with home tutoring when absent

c. Urge all concerned to apply pressure to achieve return to school and arrange an appointment with the girl

d. Begin to assist the family to explore alternative schools for possible transfer

Writing these types of test questions is not easy and will take some effort. But you get two awesome benefits: your exams are still automatically graded (who wants to spend time grading?), and they gauge students' true understanding of the material rather than simply what they can cram and memorize the night before.
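Just to make the auto-grading point concrete, here is a tiny sketch of my own (nothing from the BlendKit materials; the question IDs and answer key are invented) showing the kind of lookup an LMS performs when it scores a multiple-choice submission against a key:

# Toy sketch: scoring a multiple-choice submission against an answer key.
# The question IDs and key below are hypothetical, purely for illustration.

ANSWER_KEY = {"q1": "c", "q2": "a", "q3": "d"}

def score(responses):
    """Return the percentage of questions answered correctly."""
    correct = sum(1 for qid, choice in responses.items()
                  if ANSWER_KEY.get(qid) == choice.strip().lower())
    return 100 * correct / len(ANSWER_KEY)

print(score({"q1": "c", "q2": "b", "q3": "d"}))  # prints 66.66...

However sophisticated the scenario in the question stem, the scoring step stays this trivial, which is exactly why these exams cost the instructor nothing to grade.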
