
Rosetta

The Rosetta Stone, currently located in the British Museum in London, England.

When I stood in front of the Rosetta Stone in the British Museum in London, I had to wiggle my way through the blockade of tourists who were trying to photograph the small black tablet. Since the stone was encased in glass, I knew the reflections from the camera flashes would result in poor-quality photos. Once I had my few seconds before the 2,200-year-old tablet, I headed off to the gift shop to secure a clear photo of the Rosetta Stone and a small plaster cast of the dark black stone; both yielded far more detail than I saw when I was squeezed by the crowd.

The face of the Rosetta Stone, one of two tablets, is etched with three different scripts, each spelling out the same decree issued by King Ptolemy V from Memphis (Egypt) in 196 BCE: the upper text is Ancient Egyptian hieroglyphs, the middle portion Demotic script, and the lowest Ancient Greek. Because the Rosetta Stone presented the same text in each script (with a few minor differences among them), the tablet provided the key to our modern understanding of Egyptian hieroglyphs.

Since the Rosetta Stone is so often a metaphor for the essential clue that unlocks a new field of knowledge, why not use it as a metaphor for explaining the role of data, specifically standardized test data, in informing classroom instruction? Imagine that the different stakeholders (school administrators, teachers, students, parents, and test creators) who look at the results of standardized tests are like those who crowd before the Rosetta Stone trying to decipher its meaning.

The first linguists who worked with the Rosetta Stone were able to look closely, touch and take rubbings of the different alphabets and hieroglyphics as they translated each of the texts. They spent time puzzling over the different alphabets, and they constructed primers to help decode each of the languages. They could see the variations in the engraver’s strokes; they could examine nuances in chisel marks that formed the symbols. As to the contents of the missing or damaged sections, the linguists made educated guesses.

Likewise, in education there are those who are knowledgeable in translating the information from standardized tests, those who have spent time examining data for patterns and trends, comparing collective or individual student progress over time or perhaps comparing student cohorts. The metaphor of the Rosetta Stone, however, fails in directly comparing the different forms of data collected in the multitude of standardized tests. Each test or assessment is constructed on its own metric; the results of one standardized test do not translate to another. For example, the state-mandated Connecticut Mastery Tests (CMT, grades 3–8) are not correlated to a diagnostic reading instrument such as the Diagnostic Assessment of Reading (DAR). The Connecticut Academic Performance Test (CAPT, grade 10) cannot be directly compared to the PSAT, the ACT, or the NAEP, and none of these standardized tests is directly comparable to the others.

Consider also how the linguists who studied the Rosetta Stone lingered over the different interpretations in order to translate the symbols in the differing alphabets. They studied a finite number of symbols that related to a finite statement fixed in time.

In contrast, standardized testing associated with education reform is on the upswing, and today’s educators must review continuous waves of incoming data. Often, by the time the results are finally released, their value for informing classroom instruction has been compromised. These results serve only to tell educators what students could do months earlier, not what they are doing in real time. Just like the time-stamped images each tourist’s camera records of the Rosetta Stone, standardized tests are time-stamped snapshots of past student performance.

How ironic, then, that so much media attention is given over to the results of standardized tests in informing the public about student progress. How like the crowds snapping blurry photos around the Rosetta Stone are those who do not understand exactly what each standardized test measures.

What they should appreciate is that prioritizing the streams of data is key to improving instruction, and the day-to-day collection of information in a classroom is arguably a more accurate snapshot of student ability and progress.

There are the classroom assessments that teachers record on progress reports and report cards: homework, quizzes, tests, and projects that measure student achievement in meeting grade-level standards and requirements. Then there is the “third leg” of data, the anecdotal data that can be used to inform instruction. The anecdotal data may be in the form of noting a student sleeping in class (“Has she been up late?”), reviewing a lesson plan that did not work (“I should have used a picture to help them understand”), or reporting a fire drill during testing (“Interruptions distracted the students”). Here the multiple forms of data collected to measure student progress are fluid and always changing, and translating these results is like the linguists’ hands-on experience with the Rosetta Stone: noting the variations and nuances and making educated guesses.

Standardized test results are most useful in determining trends, and if translated correctly, these results can help educators adjust curriculum and instructional strategies. But these test results are antiquated when it comes to tracking student learning. Students are not the same day to day, week to week, semester to semester. Their lives are not inscribed in flat symbols; rather, students live lives of constant change as they evolve, grow, and learn.

As the Rosetta Stone was critical to understanding texts of the Ancient World, our standardized tests are the “ancient texts” of contemporary education. Standardized tests cannot be the only measurement the public gets to interpret on student and school performance since the results are limited as snapshots of the past. Student and school performance is best understood in looking at the timely combination of all streams of data. To do otherwise is to look at snapshots that are narrow, unchangeable, and, like many of those photos snapped in the British Museum, overexposed.

The fiction selected for standardized testing is notorious for its singular ability not to challenge; these stories do not challenge political or religious beliefs, and I have long suspected they are selected because they do not challenge academically.
My state of Connecticut has had great success locating and incorporating some of the blandest stories ever written for teens to use in the “Response to Literature” section of the Connecticut Academic Performance Test (CAPT).
The CAPT was first administered to students in grade 10 in the spring of 1994, and the quality of the “literature” has been less than challenging. For example:
  • “Amanda and the Wounded Birds”: a radio psychologist is too busy to notice the needs of her teenage daughter;
  • “A Hundred Bucks of Happy”: an unclearly defined narrator finds a $100 bill and decides to share the money with his/her family (but not his/her dad);
  • “Catch the Moon”: a young man walks a fine line between delinquency and a beautiful young woman (to be fair, there was a metaphor in this story).
At least three of the stories have included dogs:
  • “Liberty”: a dog cannot immigrate to the USA with his family;
  • “Viva New Jersey”: a lost dog makes a young immigrant feel better;
  • “The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum”: not exactly an immigrant story, but a dog emigrates from family to family in a custody battle.
We are always on the lookout for a CAPT-like story of the requisite forgettable quality to use for practice, and we found one in “A View from the Bridge” by Cherokee Paul McDonald. The story was short, with average vocabulary, average character development, and average plot complexity. I was reminded of this particular story last week when Sean, a former student, stopped by the school for a visit during his winter break from college.

The short story “A View from the Bridge” was used as a practice CAPT test prompt.

Sean was a bright student who, through his own choice, remained seriously under-challenged in class. For each assignment, Sean met the minimum requirement: minimum words required, minimum reading level for an independent book, minimum time spent on a project. I knew that Sean was more capable, but he was not going to give me the satisfaction of finding out, that is, until “A View from the Bridge.”
The story features a runner out for his jog who stops on a bridge to take a break near a young boy who is fishing, his tackle nearby. After a brief conversation, the jogger realizes that the young boy is blind. The story concludes with the jogger describing a fish the blind boy has caught but cannot see. The boy is delighted, and the jogger reaffirms that he should help his fellow man/boy.
“The story A View from the Bridge by McDonald is the most stupid story I have ever read,” wrote Sean in essay #1 in his Initial Response to Literature.
“I mean, who lets a blind boy fish by himself on a bridge? He could fall off into the water!”
I stopped reading. How had I not thought about this?
Sean continued, “Also, fishhooks are dangerous. A blind kid could put a fishhook right into a finger. How would he get that out? A trip to the emergency room, that’s how, and emergency rooms are expensive. I know, because I had to go for stitches and the bill was over $900.00.”
Wow! Sean was “Making a Connection”, and well over his minimum word count. I was very impressed, but I had a standardized rubric to follow. Sean was not addressing the details in the story. His conclusion was strong:
“I think that kid’s mother should be locked up!”
I was in a quandary. How could I grade his response against the standardized rubric? Furthermore, he was right. The story was ridiculous, but how many other students had seen that? How many had addressed this critical flaw in the plot? Only Sean was demonstrating critical thinking; the other students were all writing like the trained seals we had created.
One theory of grading suggests that teachers should reward students for what they do well, regardless of a rubric. So Sean received a passing grade on this essay assignment. There were other students who scored higher because they met the criteria, but I remember thinking how Sean’s response communicated a powerful reaction to a story beyond the demands of the standardized test. In doing so, he reminded me of the adage, “There are none so blind as those who will not see.”