Archives For CAPT

As the 10th grade English teacher, Linda’s role had been to prepare students for the rigors of the Connecticut Academic Performance Test, otherwise known as the CAPT. She had been preparing students with released exam materials, and her collection of writing prompts stretched back to 1994. Now that she is retiring, it is time to clean out the classroom. English teachers are not necessarily hoarders, but there was ample evidence that Linda had stocked enough class sets of short stories to ensure students were always more than adequately prepared. Yet she was delighted to see these particular stories go.
“Let’s de-CAPT-itate,” we laughed as we piled up the cartons containing well-worn copies of short stories.
Out went Rough Touch. Out went Machine Runner. Out went Farewell to Violet and A View from the Bridge.
I chuckled at the contents of the box labeled “depressing stories” before chucking them onto the pile.
Goodbye to Amanda and the Wounded Birds. Farewell to A Hundred Bucks of Happy. Adios to Catch the Moon. We pulled down another carton labeled “dog stories” containing Liberty, Viva New Jersey, and The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum. They too were discarded without a tear.
The chief flaw of the CAPT’s Response to Literature was its ludicrous dilution of Louise Rosenblatt’s Reader Response Theory, in which students were asked to “make a connection”:

What does the story say about people in general?  In what ways does it remind you of people you have known or experiences you have had?  You may also write about stories or other books you have read, or movies, works of art, or television programs you have seen.

That question was difficult for many of the literal readers, who, in responding to the most obvious plot point, might answer, “This story has a dog and I have a dog.” How else to explain all the dog stories? On other occasions, I found out that, while taking standardized tests in the elementary grades, students had been told, “If you have no connection to the story, make one up!” Over the years, the CAPT turned our students into very creative liars rather than literary analysts.

 

The other flaw in the Response to Literature was the evaluation question. Students were asked:

How successful was the author in creating a good piece of literature?  Use examples from the story to explain your thinking.

Many of our students found this a difficult question to negotiate, particularly if they thought the author had not written a good piece of literature, but rather an average or mildly enjoyable story. They did manage to make their opinions known, and one of my favorite student responses began, “While this story is no Macbeth, there are a few nice metaphors…”

Most of the stories on the CAPT did come from reputable writers, but they were not of the quality found in anthologies, like Saki’s “The Interlopers” or Anton Chekhov’s “The Bet.” To be honest, I did not think the CAPT essays were an authentic activity, and I particularly did not like the selections on the CAPT’s Response to Literature section.

Now the CAPT will be replaced by the Smarter Balanced Assessments (SBAC), as Connecticut has selected SBAC as its assessment consortium to measure progress with the Common Core State Standards, and the test will move to 11th grade. This year (2014) is the pilot test only; there are no exemplars and no results. The SBAC is digital, and in the future we will practice taking this test on our devices, so there is no need to hang onto class sets of short stories. So why am I concerned that there will be no real difference with the SBAC? Cleaning the classroom may be a transition more symbolic of our move from paper to keyboard than of our gaining an authentic assessment.

Nevertheless, Linda’s classroom looked several tons lighter.

“We are finally de-CAPT-itated!” I announced, looking at the stack of boxes ready for the dumpster.

“Just in time to be SBAC-kled!” Linda responded cheerfully.

March Madness is not exclusive to basketball.
March Madness signals the standardized testing season here in Connecticut.
March Madness signals the tip-off for testing in 23 other states as well.

All CT school districts were offered the opportunity to choose the soon-to-be-phased-out pen-and-paper grades 3-8 Connecticut Mastery Tests (CMT) and grade 10 Connecticut Academic Performance Test (CAPT), OR to choose the new set of computer-adaptive Smarter Balanced tests developed by the federally funded Smarter Balanced Assessment Consortium (SBAC). Regardless of choice, testing would begin in March 2014.

As an incentive, the SBAC offered the 2014 field test as a “practice only”, a means to develop/calibrate future tests to be given in 2015, when the results will be recorded and shared with students and educators. Districts weighed their choices based on technology requirements, and many chose the SBAC field test. But for high school juniors who had completed the pen and paper CAPT in 2013, this is practice; they will receive no feedback. This 2014 SBAC field test will not count.

Unfortunately, the same cannot be said for counting the 8.5 hours of testing in English/language arts and mathematics that had to be taken from 2014 academic classes. The elimination of 510 minutes of instructional time is complicated by scheduling students into computer labs with hardware that meets testing specifications. For example, rotating students alphabetically through these labs means that academic classes scheduled during the testing windows may see students A-L one day and students M-Z on another. Additional complications arise for mixed-grade classrooms or schools with block schedules. Teachers must be prepared with partial or repeating lessons during the two-week testing period; some teachers may miss seeing students for extended periods of time. Scheduling madness.

For years, the state standardized test was given to sophomores in grade 10. In Connecticut, the results were never timely enough to inform instruction addressing areas of weakness during 10th grade, but they did help identify general areas of weakness in the mathematics, English/language arts, and science curricula. Students who had not passed the CAPT had two more years to pass this graduation requirement; two more years of education were available to address specific student weaknesses.

In contrast, the SBAC is designed to be given to 11th graders, the junior class. Never mind that these junior year students are preparing to sit for the SAT or ACT, national standardized tests. Never mind that many of these same juniors have opted to take Advanced Placement courses with testing dates scheduled for the first two full weeks of May. On Twitter, AP teachers from New England to the mid-Atlantic are already complaining about the number of delays and school days lost to winter weather (for us, five) and the scheduled week of spring break (for us, the third week of April) that comes right before testing for these AP college credit exams. There is content to be covered, and teachers are voicing concerns about losing classroom seat time. Madness.

Preparing students to be college and career ready by eliminating the instructional time teachers use to prepare students for college-required standardized testing (SAT, ACT) is puzzling, but taking instructional time so students can take state-mandated standardized tests that claim to measure preparedness for college and career is an exercise in circular logic. Juniors are experiencing an educational Catch-22: they are practicing for a test they will never take, a field test that does not count. More madness.

In addition, juniors who failed the CT CAPT in grade 10 will still practice with the field test in 2014. Their CAPT graduation requirement, however, cannot be met with this test, and they must still take an alternative assessment to meet district standards. Furthermore, from 2015 on, students who do not pass SBAC will not have two years to meet a state graduation requirement; their window to meet the graduation standard is limited to their senior year. Even more madness.

Now, on the eve of the inaugural testing season, a tweet from SBAC itself (3/14):

[Screenshot of the SBAC tweet, March 14, 2014]

This tweet was followed by word from CT Department of Education Commissioner Stefan Pryor’s office, sent to superintendents by Dianna Roberge-Wentzell, that the state test would be delayed a week:

Schools that anticipated administering the Field Test during the first week of testing window 1 (March 18 – March 24) will need to adjust their schedule. It is possible that these schools might be able to reschedule the testing days to fall within the remainder of the first testing window or extend testing into the first week of window 2 (April 7 – April 11).

Education Week blogger Stephen Sawchuk provides more details on the reason for the delay in his post Smarter Balanced Group Delays:

The delay isn’t about the test’s content, officials said: It’s about ensuring that all the important elements, including the software and accessibility features (such as read-aloud assistance for certain students with disabilities) are working together seamlessly.

“There’s a huge amount of quality checking you want to do to make sure that things go well, and that when students sit down, the test is ready for them, and if they have any special supports, that they’re loaded in and ready to go,” Jacqueline King, a spokeswoman for Smarter Balanced, said in a March 14 interview. “We’re well on our way through that, but we decided yesterday that we needed a few more days to make sure we had absolutely done all that we could before students start to take the field tests.”

A few more days is what teachers who carefully planned alternative lessons for the first week of the field test probably want in order to revise those plans. The notice in the CT memo that districts “might be able to reschedule” is not helpful for a smooth delivery of curriculum, especially since school schedules are not developed with empty time slots available to accommodate “willy-nilly testing” windows. Field trips, author visits, and assemblies are scattered throughout the year, sometimes organized years in advance. Cancellation of activities can be at best disappointing, at worst costly. Increasing madness.

Added to all this madness is a growing “opt-out” movement for the field test. District administrators are trying to address concerns from parents on one front and the growing concerns of educators wrestling with an increasingly fluid schedule on another. According to Sarah Darer Littman, writing at Connecticut News Junkie, the Bethel school district offered the following in a letter that parents of Bethel High School students received in February:

“Unless we are able to field test students, we will not know what assessment items and performance tasks work well and what must be changed in the future development of the test . . . Therefore, every child’s participation is critical.

For actively participating in both portions of the field test (mathematics/English language arts), students will receive 10 hours of community service and they will be eligible for exemption from their final exam in English and/or Math if they receive a B average (83) or higher in that class during Semester Two.”

Field testing as community service? Madness. Littman goes on to point out that research shows a student’s GPA is a better indicator of college success than an SAT score, and she suggests the exemption raises questions about a district valuing standardized testing over student GPA, its own internal measurement. That statement may cause even more madness, of an entirely different sort.

Connecticut is not the only state to be impacted by the delay. SBAC states include: California, Delaware,  Hawaii, Idaho, Iowa, Maine, Michigan, Missouri, Montana, Nevada, New Hampshire, North Carolina, North Dakota, Oregon, Pennsylvania, South Carolina, South Dakota, U.S. Virgin Islands, Vermont, Washington, West Virginia, Wisconsin, Wyoming.

In the past, Connecticut has been called “The Land of Steady Habits,” “The Constitution State,” and “The Nutmeg State.” With SBAC, we could claim that we are now “A State of Madness,” except that the 23 other states might want the same moniker. Maybe we should compete for the title? A kind of Education Bracketology, just in time for March Madness.

This post completes a trilogy of reflections on the Connecticut Academic Performance Test (CAPT), which will be terminated once the new Smarter Balanced Assessments tied to the Common Core State Standards (CCSS) are implemented. There will be at least one more year of the same CAPT assessments, specifically the Interdisciplinary Writing prompt (IW), where 10th grade students write a persuasive essay in response to news articles. While the horribly misnamed Response to Literature (RTL) prompt confuses students as to how to truthfully evaluate a story and drives them into “making stories up” in order to respond to a question, the IW shallowly addresses persuasive writing with prompts that have little academic value.

According to the CAPT Handbook (3rd Generation) on the CT State Department of Education’s website, the IW uses authentic nonfiction texts that have been:

“… published and are informational and persuasive, 700-1,000 words each in length, and at a 10th-grade reading level.  The texts represent varied content areas (e.g., newspaper, magazine, and online articles, journals, speeches, reports, summaries, interviews, memos, letters, reviews, government documents, workplace and consumer materials, and editorials).  The texts support both the pro and con side of the introduced issue.  Every effort is made to ensure the nonfiction texts are contemporary, multicultural, engaging, appropriate for statewide implementation, and void of any stereotyping or bias.  Each text may include corresponding maps, charts, graphs, and tables.”

Rather than being taught in English, interdisciplinary writing is taught in social studies because the subject of social studies is already interdisciplinary. The big tent of social studies includes elements of economics, biography, law, statistics, theology, philosophy, geography, sociology, psychology, anthropology, political science and, of course, history. Generally, 9th and 10th grade students study the ancient world through the modern European world (through WWII) in social studies. Some schools may offer civics in grade 10.

Social studies teachers always struggle to capture the breadth of history, usually Western Civilization, in two years. However, for the 15 months before the CAPT, social studies teachers must also prepare students to write for the IW test. But does the IW reflect any of the content-rich material in social studies class? No, it does not. Instead, the IW prompt is built around some “student-centered” contemporary issue. For example, past prompts have included:

  • Should students be able to purchase chocolate milk in school?
  • Should utility companies construct wind farms in locations where windmills may impact scenery or wildlife?
  • Should ATVs be allowed in Yellowstone Park?
  • Should the school day start later?
  • Should an athlete who commits a crime be allowed to participate on a sports team?
  • Should there be random drug testing of high school students?

On the English section of the test, there are responses dealing with theme, character and plot. On the science section, the life, physical and earth sciences are woven together in a scientific inquiry. On the math section, numeracy is tested in problem-solving. In contrast to these disciplines, the social studies section, the IW, has little or nothing to do with the subject content. Students only need to write persuasively on ANY topic:

For each test, a student must respond to one task, composed of a contemporary issue with two sources representing pro/con perspectives on the issue.  The task requires a student to take a position on the issue, either pro or con.  A student must support his or her position with information from both sources.  A student, for example, may be asked to draft a letter to his or her congressperson, prepare an editorial for a newspaper, or attempt to persuade a particular audience to adopt a particular position.  The task assesses a student’s ability to respond to five assessed dimensions in relationship to the nonfiction text: (1) take a clear position on the issue, (2) support the position with accurate and relevant information from the source materials, (3) use information from all of the source materials, (4) organize ideas logically and effectively, and (5) express ideas in one’s own words with clarity and fluency.

The “authentic” portions of this test are the news articles, but the released materials illustrate that these articles are never completely one-sided; if they are written well, they already include a counter-position. Students, therefore, are regurgitating already highly filtered arguments. Second, the student responses never find their way into the hands of legislators or newspaper editors, so the responses are not authentic in their delivery. Finally, because these prompts have little to do with social studies, valuable time that could be used to improve student content knowledge of history is being lost. Some teachers use historical content to practice writing skills, but there is always instructional time spent practicing with released exam materials.

Why are students asked to argue about the length of a school day when, if presented with enough information, they could argue a position that reflects what they are learning in social studies? If they were provided the same kinds of newspaper, magazine, and online articles, journals, speeches, reports, summaries, interviews, memos, letters, reviews, government documents, workplace and consumer materials, and editorials, could students write persuasive essays with social studies content that is measurable? Most certainly. Students could argue whether they would support a government like Athens or a government like Sparta. Students could be provided brief biographies and statements of belief for different philosophers to argue whom they would prefer as a teacher, Descartes or Hegel. Students could write persuasively about which amendment of the United States Constitution they believe needs to be revisited, Amendment 10 (States’ Rights) or Amendment 27 (Limiting Changes to Congressional Pay).

How unfortunate that such forgettable issues as chocolate milk or ATVs are considered worthy of determining a student’s ability to write persuasively. How inauthentic to encourage students to write to a legislator or editor and then do nothing with the students’ opinions. How depressing to know that the time and opportunity to teach and to measure a student’s understanding of the rich content of social studies is lost every year with IW test preparation.

Maybe the writers of the CAPT IW prompt should have taken a lesson from the writers of Saturday Night Live and the “Coffee Talk” sketches featuring Mike Myers. In these sketches, Myers played Linda Richman, host of the call-in talk show “Coffee Talk.” When s(he) would become too emotional (or verklempt, or farklempt) to talk, s(he) would “give a topic” and tell callers to talk “amongst yourselves.” Holding back tears, waving red nails in front of his face furiously, Myers would gasp out one of the following:

“The Holy Roman Empire was neither holy, Roman, nor an empire….Discuss…”

“Franklin Delano Roosevelt’s New Deal was neither new nor a deal…. Discuss…”

“The radical reconstruction of the South was neither radical nor a reconstruction…. Discuss…”

“The internal combustion engine was neither internal nor a combustion engine…. Discuss…”

If a comedy show can come up with these academic topics for laughs, why can’t students answer them for real? At least they would understand what made the sketches funny, and that understanding would be authentic.

March in Connecticut brings two unpleasant realities: high winds and the state standardized tests. Specifically, the Connecticut Academic Performance Test (CAPT), given in grade 10, covers the subjects of math, social studies, science, and English.

There are two tests in the English section of the CAPT to demonstrate student proficiency in reading. In one, students are given a published story of 2,000-3,000 words in length at a 10th-grade reading level. They have 70 minutes to read the story and draft four essay responses.

What is being tested is the student’s ability to comprehend, analyze, synthesize, and evaluate. While these goals are properly aligned to Bloom’s taxonomy, the entire enterprise smacks of intellectual dishonesty when “Response to Literature” is the title of this section of the test.

Literature is defined online as:

“imaginative or creative writing, especially of recognized artistic value: or writings in prose or verse; especially writings having excellence of form or expression and expressing ideas of permanent or universal interest.”

What the students read on the test is not literature. What they read is a story.

A story is defined as:

“an account of imaginary or real people and events told for entertainment.”

While the distinction may seem small at first, the students have a very difficult time responding to the last of the four questions asked in the test:

How successful was the author in creating a good piece of literature? Use examples from the story to explain your thinking.

The problem is that the students want to be honest.

When we practice writing responses to this question, we use the released test materials from previous years: “Amanda and the Wounded Birds,” “A Hundred Bucks of Happy,” “Machine Runner,” or “Playing for Berlinsky.” When the students write their responses, they are able to show that they understood the story and that they can make a connection. However, many students complain that the story they just read is not “good” literature.

I should be proud that the students recognize the difference. In Grades 9 & 10, they are fed a steady diet of great literature: The Odyssey, Of Mice and Men, Romeo and Juliet, All Quiet on the Western Front, Animal Farm, Oliver Twist. The students develop an understanding of characterization. They are able to tease out complex themes and identify “author’s craft.” We read the short stories “The Interlopers” by Saki, “The Sniper” by Liam O’Flaherty, or “All Summer in a Day” by Ray Bradbury. We practice the CAPT good literature question with these works of literature. The students generally score well.

But when the students are asked to do the same for a CAPT story like the 2011 story “The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum”, they are uncomfortable trying to find the same rich elements that make literature good. A few students will be brave enough to take on the question with statements such as:

  • “Because these characters are nothing like Lennie and George in Of Mice and Men…”
  • “I am unable to find one iota of author’s craft, but I did find a metaphor.”
  • “I am intelligent enough to know that this is not ‘literature’…”

I generally caution my students not to write against the prompt. All the CAPT released exemplars are rife with praise for each story offered year after year. But I also recognize that calling the stories offered on the CAPT “literature” promotes intellectual dishonesty.

Perhaps the distinction between literature and story is not the biggest problem that students encounter when they take a CAPT Response to Literature. For at least one more year students will handwrite all responses under timed conditions: read a short story (30 minutes) and answer four questions (40 minutes). Digital platforms will be introduced in 2014, and that may help students who are becoming more proficient with keyboards than pencils.
But even digital platforms will not resolve the other significant issue, the “connection” question (#3) on the CAPT Response to Literature:

 What does this story say about people in general? In what ways does it remind you of people you have known or experiences you have had?  You may also write about stories or books you have read or movies, works of art, or television programs you have seen.  Use examples from the story to explain your thinking.

Inevitably, a large percentage of students write about personal experiences when they make a connection to the text. They write about “friends who have had the same problem” or “a relative who is just like” or “neighbors who also had trouble.” When I read these in practice sessions, I sometimes comment to the student, “I am sorry to hear about ____.”

However, the most frequent reply is startling.

“No, that’s okay. I just made that up for the test.”

At least they know that their story, “an account of imaginary or real people and events told for entertainment,” is not literature, either.

The fiction selected for standardized testing is notorious for its singular ability not to challenge; these stories do not challenge political or religious beliefs, and  I have long suspected they are selected because they do not challenge academically.
My state of Connecticut has had great success locating and incorporating some of the blandest stories ever written for teens to use in the “Response to Literature” section of the Connecticut Academic Performance Test (CAPT).
The CAPT was first administered to students in grade 10 in the spring of 1994, and the quality of the “literature” has been less than challenging. For example:
  • Amanda and the Wounded Birds: A radio psychologist is too busy to notice the needs of her teen-age daughter;
  • A Hundred Bucks of Happy: An unclearly defined narrator finds a $100 bill and decides to share the money with his/her family (but not his/her dad);
  • Catch the Moon: A young man walks a fine line between delinquency and a beautiful young woman (to be fair, there was a metaphor in this story)
At least three of the stories have included dogs:
  • Liberty: a dog cannot immigrate to the USA with his family;
  • Viva New Jersey: a lost dog makes a young immigrant feel better;
  • The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum: not exactly an immigrant story, but a dog emigrates from family to family in a custody battle.
We are always on the lookout for a CAPT-like story of the requisite forgettable quality for practice, and that is how we came upon the story “A View from the Bridge” by Cherokee Paul McDonald. The story was short, with average vocabulary, average character development, and average plot complexity. I was reminded of this particular story last week when Sean, a former student, stopped by the school for a visit during his winter break from college.

The short story "A View from the Bridge" was used as a practice CAPT test prompt

The short story “A View from the Bridge” was used as a practice CAPT test prompt

Sean was a bright student who, through his own choice, remained seriously under-challenged in class. For each assignment, Sean met the minimum requirement: minimum words required, minimum reading level in his independent book, minimum time spent on a project. I knew that Sean was more capable, but he was not going to give me the satisfaction of finding out; that is, until “A View from the Bridge.”
The story features a jogger who stops on a bridge to take a break near a young boy who is fishing, his tackle nearby. After a brief conversation, the jogger realizes that the boy is blind. The story concludes with the jogger describing a fish the blind boy has caught but cannot see. At the story’s conclusion, the boy is delighted, and the jogger reaffirms that he should help his fellow man/boy.
“The story ‘A View from the Bridge’ by McDonald is the most stupid story I have ever read,” wrote Sean in essay #1, his Initial Response to Literature.
“I mean, who lets a blind boy fish by himself on a bridge? He could fall off into the water!”
I stopped reading. How had I not thought about this?
Sean continued, “Also, fishhooks are dangerous. A blind kid could put a fishhook right into a finger. How would he get that out? A trip to the emergency room, that’s how, and emergency rooms are expensive. I know, because I had to go for stitches and the bill was over $900.00.”
Wow! Sean was “Making a Connection”, and well over his minimum word count. I was very impressed, but I had a standardized rubric to follow. Sean was not addressing the details in the story. His conclusion was strong:
“I think that  kid’s mother should be locked up!”
I was in a quandary. How could I grade his response against the standardized rubric? Furthermore, he was right. The story was ridiculous, but how many other students had seen that? How many had addressed this critical flaw in the plot? Only Sean was demonstrating critical thinking; the other students were all writing like the trained seals we had created.
One theory of grading suggests that teachers should reward students for what they do well, regardless of a rubric. So Sean received a passing grade on this essay assignment. There were other students who scored higher because they met the criteria, but I remember thinking how Sean’s response communicated a powerful reaction to a story beyond the demands of the standardized test. In doing so, he reminded me of the adage, “There are none so blind as those who cannot see.”

Is this the Age of Enlightenment? No.
Is this the Age of Reason? No.
Is this the Age of Discovery? No.

This is the Age of Measurement.

Specifically, this is the age of measurement in education where an unprecedented amount of a teacher’s time is being given over to the collection and review of data. Student achievement is being measured with multiple tools in the pursuit of improving student outcomes.

I am becoming particularly attuned to the many ways student achievement is measured as our high school is scheduled for an accreditation visit by the New England Association of Schools and Colleges (NEASC) in the spring of 2014. I am serving as a co-chair with the very capable library media specialist, and we are preparing school-wide rubrics for use.

Several of our school-wide rubrics currently in use have been designed to complement scoring systems associated with our state tests,  the Connecticut Mastery Tests (CMT) or Connecticut Academic Performance Tests (CAPT). While we have modified the criteria and revised the language in the descriptors to meet our needs, we have kept the same number of qualitative criteria in our rubrics. For example, our reading comprehension rubric has the same two scoring criteria as does the CAPT. Where our rubric asks students to “explain”, the CAPT asks students to “interpret”. The three rating levels of our rubric are “limited”, “acceptable”, and  “excellent” while the CAPT Reading for Information ratings are “below basic”, “proficient”, and “goal”.

We have other standardized rubrics as well; for example, we have rubrics that mimic the six-point PSAT/SAT scoring for our junior essays, and we also have rubrics that address the nine-point Advanced Placement scoring scale.

Our creation of rubrics to meet the scoring scales for standardized tests is not an accident. Our customized rubrics help our teachers determine a student’s performance growth on common assessments that serve as indicators for standardized tests. Many of our current rubrics correspond to standardized test scoring scales of 3, 6, or 9 points; however, these rating levels will soon change.

Our reading and writing rubrics will need to be recalibrated in order to present NEASC with school-wide rubrics that measure 21st Century Learning skills; other rubrics will need to be designed for new topics. Our NEASC committee has determined that four-point scoring rubrics would be more appropriate for the six topics:

  • Collaboration
  • Information literacy*
  • Communication*
  • Creativity and innovation
  • Problem solving*
  • Responsible citizenship

These six scoring criteria for NEASC highlight the measurement gap created by relying on standardized tests, which directly address only three (*) of these 21st Century skills. Measuring the other 21st Century skills requires schools like ours to develop their own data stream.

Measuring student performance should require multiple metrics. Measuring student performance in Connecticut, however, is complicated by the lack of common scoring rubrics between the state standardized tests and the accrediting agency NEASC. The scoring of the state tests themselves can also be confusing, as three (3) or six (6) point score results are organized into bands labeled 1-5. Scoring inequities could be exacerbated when the CMT, CAPT, and similar standardized tests are used in 2013 and 2014 as 40% of a teacher’s evaluation, with an additional 5% based on whole-school performance. The measurement of student performance in 21st Century skills will be addressed in teacher evaluation through the Common Core State Standards (CCSS), but those tests are currently being designed. By 2015, new tests that measure student achievement according to the CCSS, with their own criteria, levels, and descriptors in new rubrics, will be implemented. This emphasis on standardized tests measuring student performance with multiple rubrics has become the significant measure of student and teacher performance, a result of the newly adopted Connecticut teacher evaluation (SEED) program.

The consequence is that today’s classroom teachers spend a great deal of time reviewing data that has limited correlation between the standards of measurement found in statewide tests (CMT, CAPT, CCSS), those in nationwide tests (AP, PSAT, SAT, ACT), and what accrediting agencies (NEASC) expect. Ultimately, valuable teacher time is being expended in determining student progress across a multitude of rubrics with little correlation; yes, in simplest terms, teachers are spending a great deal of time comparing apples to oranges.

I do not believe that a single metric, such as Connecticut’s CMT or CAPT or any standardized test, accurately reflects a year of student learning; I believe that these tests are snapshots of student performance on a given day. The goals of NEASC in accrediting schools, measuring student performance with school-wide rubrics that demonstrate students performing 21st Century skills, are more laudable. However, since the singular test metric has been adopted as a critical part of Connecticut’s newly adopted teacher evaluation system, teachers here must serve two masters, testing and accreditation, each with its own separate system of measurement.

With the aggregation of all these differing data streams, there is one data stream missing. There is no data being collected on the cost in teacher hours for the collection, review, and recalibration of data. That specific stream of data would show that in this Age of Measurement, teachers have less time to work with students; the kind of time that could allow teachers to engage students in the qualities of ages past: reason, discovery, and enlightenment.

GOAL: School districts want to report that their students read great literature.
GOAL: School districts want to report good reading test scores.

Unfortunately, these two goals are currently incompatible; great literature’s complexity can be challenging to read, and schools can ill afford to have students get low test scores on reading because of great literature’s complexity.

Concerns about the removal of great literature from classrooms have been raised before, but NY public school English teacher Claire Needall Hollander passionately argues how intellectually damaging this practice has become under state testing. Her op-ed piece in the 4/21/12 NYTimes, Teach the Books, Touch the Heart, decries the elimination of great literature in the classroom in order to incorporate practice materials that prepare students to take the standardized tests. Hollander described her role as a reading enrichment teacher as an opportunity to provide great literature as academic equity for her students. She described several of her students as the sons and daughters of immigrants or of incarcerated parents; she noted some students lived in crowded, violent, or abusive homes. Great literature, she believed, was “cultural capital” that could help her students compete against more affluent peers. However, when the lackluster data from standardized reading tests came in, she felt pressured to abandon great literature and curtailed her efforts for the majority of these students in order to teach materials prescribed for the state test. While the reading selections on the state tests did have some syntactical complexity, she eventually decided that these reading materials lacked the literary qualities that make literature great. Texts that are “symbolic, allusive or ambiguous are more or less absent from testing materials.” Hollander writes, “It is ironic, then, that English Language exams are designed for ‘cultural neutrality.’”

In one sense, great literature is already culturally neutral. The themes or characters in a great piece of literature are not limited to one decade or one millennium. The elements that make a work of literature great can transcend culture and context, can speak to a universal audience, and can be read in any tradition and still connect to a reader. Ms. Hollander’s concerns about cultural neutrality are akin to concerns about cultural acceptability. Creators of standardized tests are particularly sensitive in selecting texts that are culturally acceptable because great literature intentionally confronts morality, questions society’s rules, or challenges tradition. Great literature gives voice to the outsider, and authors of great literature are often on the margins of society or write to unsettle the status quo. For these reasons, selections from great literature may not be considered culturally acceptable.

I have some experience with what goes onto a standardized state test, as I had a seat one year as a member of the text selection committee for the reading and writing sections of the Connecticut Academic Performance Test (CAPT) given to grade 10 students. Much time was spent reviewing materials for inclusion on a future Response to Literature exam. Out of a number of mediocre short stories, the only selection given to educators that could meet some standards of great literature was a chapter from Lois Lowry’s Number the Stars, a young adult novel that is usually read in grade 5. That selection was eliminated not only because of the low reading level (5.1; Lexile 670) but also because of the manner in which Lowry portrayed the terrifying rounding up of Jews. One committee member actually wondered aloud if Lowry could be persuaded to “reword the chapter” to address the concern. Fortunately, that debate ended with the decision that the chapter was not “acceptable” to the committee.

One problem in great literature is difficult vocabulary; for example, the simple conversations between the Man and the Boy in Cormac McCarthy’s The Road (RL 4) are interspersed with diction describing the apocalyptic setting: “rachitic,” “miasma,” “escarpment,” “crozzled.” Another problem is vocabulary considered vulgar or profane, which has eliminated a number of literary pieces from standardized testing and even from school libraries. The American Library Association (ALA) website lists challenges to classic literature that Hollander might teach: To Kill a Mockingbird “contains racial slurs”; Of Mice and Men “takes God’s name in vain 15 times and uses Jesus’s name lightly.” Finally, great literature almost always contains themes that can be considered dangerous or offensive to someone in society: The Color Purple is “sexually graphic and violent”; 1984 is “pro-communist”; and Catcher in the Rye is infamously “blasphemous and undermines morality.”

Engineering English language tests in order to make them culturally neutral or culturally acceptable encourages intellectual dishonesty. Take the reading section of the Connecticut Academic Performance Test (CAPT), where every 10th grader is required to read a short story and evaluate its quality, “How successful was the author in creating a good piece of literature?”, in a one-page essay. I have spent over 10 years preparing students for this question on the Response to Literature standardized test, and I know how students struggle with it. Many students do not read challenging texts outside of the classroom, limiting their opportunity to develop critical evaluation skills. However, the more distressing problem is that year after year, the quality of the story on the CAPT pales in comparison to the classic short stories a student could encounter in even the most limited literature anthology. Classic short stories available in the public domain by Saki, Anton Chekhov, Kate Chopin, Stephen Crane, and Jack London, to name a few, are considered too difficult for independent reading by third-quarter 10th grade students. Copyright requirements, or an author’s unwillingness to truncate a story to comply with a maximum word requirement or to make textual changes that render the subject palatable to a text selection committee, prevent other literary materials from being used. As a result, more recent selections have come from Teen Ink (stories written by teens) and Boys’ Life magazine, publications not known for superior literary content. While some stories may meet a sentence complexity standard and have been vetted for acceptable content, most lack the literary depth that should generate thoughtful critical responses to a prompt that asks about “good literature.”

To further complicate the choice a student makes in a response, released materials from previous exams used to show students how to respond to “How successful was the author in creating a good piece of literature?” include student responses, and all of the exemplars, good and bad, argue that the story was “good.” The lack of reader experience, coupled with the year-to-year see-saw quality of the text on the exam, places students in the uncomfortable position of defending a merely average story as good literature; therefore, the prompt promotes intellectual dishonesty.

Perhaps the problem of including good literature on a standardized test may be addressed with the adoption of the English Language Arts Common Core State Standards where text complexity is standard #10: “By the end of grade 9, read and comprehend literature, including stories, dramas, and poems, in the grades 9–10 text complexity band proficiently, with scaffolding as needed at the high end of the range.”

In other words, the literature used on a CCSS English Language Arts exam might be substantively different from the texts used on the Response to Literature section of the CAPT. This could make the response about the quality of text more authentic, since a complex literary text can be analyzed as “good literature.” How this more complex literary text will be used in testing, however, remains to be seen, since history demonstrates that cultural opposition to a story will often trump quality.

Comprehending and evaluating a text are desirable skills, and measuring those skills will still be difficult.  Multiple choice questions are quickly corrected, but they are limited to measuring reading comprehension, and a student essay response to a complex text will require considerably more time to write and correct. Anticipating this, Hollander calls for an assessment that is more reflective of student learning:

 “Instead, we should move toward extensive written exams, in which students could grapple with literary passages and books they have read in class, along with assessments of students’ reports and projects from throughout the year. This kind of system would be less objective and probably more time-consuming for administrators, but it would also free teachers from endless test preparation and let students focus on real learning.”

Those developing CCSS assessments for the states should consider Hollander’s proposal. All stakeholders should also recognize that using anything less than quality literature to measure a student’s reading comprehension and evaluation skills on an English/Language Arts exam is intellectually dishonest.

The marathon of testing is over! In the State of Connecticut, the two-week window for the Connecticut Mastery Tests (CMT, elementary) and the Connecticut Academic Performance Test (CAPT, grade 10) has ended, and some teachers are looking at the two-week “hole” in grade books and unit plans that the intensive state testing created.

While education experts strongly advocate against “teaching to the test” and advocate the development of skills, most classroom teachers feel some obligation to prepare students for the tests by simulating at least one timed practice session for a specific test. Our state releases past testing materials for each discipline, and to be honest, our students do a fair amount of practice with these released materials before the test.

For the past two weeks, the daily school schedules have been modified to accommodate early morning testing sessions. During the school day, the lessons for students who have spent a grueling 45-90 minutes calculating or writing have been modified as well. For example, when they finally attend English classes, our tenth grade students have been provided silent sustained reading time for books they have independently chosen, or they have been watching videos to supplement a world literature unit on people in conflict.

The reading or Response to Literature test, associated with English classes, requires students to read a short story and then write four lengthy responses. Sadly, year after year, the quality of the story on this test pales in comparison to the classic short stories a student will encounter in even the most limited literature anthology. So we prepare students to respond to a question that asks “Is this good literature?” with even the most mediocre story. Now that the test is over, students will begin the epic poem Beowulf, and the teachers are looking forward to having the students engage with this 8th Century grandfather of literature. We are ready for some “epic-hero-wrenching-monster’s-arm” action.

The writing or Writing Across the Disciplines test, associated with social studies, requires students to read newspaper articles about a controversial topic, take a position on that topic, and then develop a persuasive argument. There is absolutely no content from the social studies curriculum, in this case Modern World History, associated with the test. Now that this test is over, teachers can return to the history content outlined in their curriculums; back to the arrival of the American forces on the shores of Iwo Jima and in the forests of the Ardennes.

Pencil and Scantron testing is not an authentic practice in the world outside the classroom, but I am not against testing as a means to determine student progress; I accept that some form of testing is inevitable in education. However, the past two weeks of reading instructions (“Does everyone have two #2 pencils?”), writing in booklets (“Stop. Do not turn the page”), and racing against the clock (“You have 10 minutes left”) have taken a toll on students and faculty alike. Everyone is looking forward to the routine of a regular schedule.

Wearily, our students climbed the stairs for the last time this morning after taking the final “supplemental” test, an extra assessment given to trial materials for future test-takers. The students’ time in the testing crucible has passed; their scores will be posted during the lazy days of summer, when this experience will be nothing but a memory. Hopefully, they will have done well, and we will be pleased with the results.

Post-CAPT, there are several weeks left in the third quarter, and one full quarter after that. Teachers can, with clear consciences, return to content without incorporating CAPT preparation. Our tenth graders will have the chance to read Macbeth, where they will have the opportunity to create and respond to more significant questions than “Is this good literature?” The importance of this great play, set against the activities of the past two weeks, puts me in mind to parody Shakespeare’s famous speech:

Out, out, two weeks of testing
The CAPT is but a walking shadow, a poor player,
That struts and frets his hour upon the stage,
And then is heard no more. It is a test,
mandated by others, full of sound and fury,
Signifying nothing.