Guest blog: Multiple choice madness.

One of my great privileges is that I have many smart friends. Les Perelman is Director of Writing Across the Curriculum in the Program in Writing and Humanistic Studies at the Massachusetts Institute of Technology. Here he responds to the recent testing scandal in Atlanta and talks about the tests involved in it.

First, let me state explicitly that I in no way condone the cheating that has gone on in Atlanta. Second, I am limiting my remarks to the English Language Arts because that is the only area in which I have expertise.

With those reservations, the Georgia Criterion-Referenced Competency Tests (CRCT) program, at least in its English Language Arts component, is a perfect example of mass-market testing run amok. The teachers and administrators should not have erased incorrect answers and replaced them with correct ones. The core problem, however, is that multiple-choice mass-market tests do not assess what good teachers teach; they assess what is easy and, especially, cheap for the testing companies to score. The Georgia test was designed by one of the major Big Test companies, the educational equivalent of Big Pharma: CTB/McGraw-Hill, a division of McGraw-Hill, the company whose other big money-making division, Standard & Poor's, helped create the debt crisis of 2008.

The key to cheap testing is multiple-choice questions. They are easy and inexpensive to score. Unfortunately, they often do not measure the cognitive processes they purport to measure. The State of Georgia's English Language Arts Content Descriptions lists "Research/Writing Process" as one of the content domains for the fourth grade, which it states "refers to students' skill in using and analyzing the purpose of research and technology, using resources to support the writing process, and evaluating the various strategies, styles, and purposes of written organization." The document lists as one of its targets the ability to "analyze various reference materials by determining the appropriate source for a given situation and using information from a given source, such as: Internet, atlas, encyclopedia, magazines, thesaurus, newspaper."

I am a big proponent of information literacy.  However, how does the Georgia Program test it?  Here is a sample item from the state’s Grade 4 Study Guide:

6. Where would this information most likely be found?

The town of Livingston was founded in 1825. Henry Keane was elected the first mayor of Livingston in 1835. The Keane House is the oldest home still standing in Essex County.

A. a visitor's guide to Livingston
B. a map of the town of Livingston
C. an interview with a family who lives in Livingston
D. a newspaper article about Livingston's new mayor

The answer is A. But should a well-taught, information-literate nine- or ten-year-old be able to figure it out? The English Language Arts Content Descriptions lists standard reference sources, and a visitor's guide is not one of them. A town visitor's guide, whether found in a hotel room or picked up at a Chamber of Commerce or Visitor Center, is not an authoritative reference source. Because visitor's guides are not usually taught as a reference source, probably only students who have stayed in hotels that have visitor's guides in the room (that is, middle- and upper-class students) have a reasonable chance of getting the question right.

Moreover, the structure of the question reverses the research process. There are few cases in which one already has the information and is seeking its source. The common situation is that someone has a question and wants to know the most likely and most authoritative source for the answer. A real test of student learning would involve students producing short written research plans that could be evaluated by people, not bubble marks graded by a Scantron machine. The Georgia Department of Education is aware that written answers would be better than multiple-choice questions. On its website explaining the test, it states in assessment jargon, "Currently, the mandated end-of-year assessments contain selected-response items only; however, a small number of constructed-response items may be included in subsequent years." If I were a teacher in Georgia and knew that my school was going to be evaluated based on students' performance on such an absurd test, I would be very angry. I wouldn't cheat, but I can very well understand the motivation of those who did.

Georgia actually does know better. Concurrent with the CRCT, Georgia administers a performance-based writing assessment to students in grades three, five, eight, and eleven. These assessments consist of real writing assignments and receive scores from two readers on four features: Ideas, Organization, Style, and Conventions. The writing assessments are mostly used to provide diagnostic feedback to teachers, students, and parents about individual performance, as well as classroom and school summary statistics. High school students, however, are required to achieve a passing score on the eleventh-grade writing assessment in order to graduate. These writing assessments are far from perfect, but they at least try to measure real learning rather than the McLearning of the multiple-choice tests. Moreover, no one has been accused of erasing and changing answers on the writing assessments. They are a more authentic measure of learning, and it would be very difficult to rewrite thousands of student essays.

http://public.doe.k12.ga.us/ci_testing.aspx

Les Perelman
