Archives For CAPT

Notice how I am trying to beat the character limit on headlines?

Here’s the translation:

For your information, Juniors: Connecticut’s Common Core State Standards Smarter Balanced Assessment [Consortium] is Dead on Arrival; Insert Scholastic Aptitude Test

Yes, in the State of Connecticut, the test created through the Smarter Balanced Assessment Consortium (SBAC) and based on the Common Core State Standards will be canceled for juniors (11th graders) this coming school year (2015-16) and replaced by the Scholastic Aptitude Test (SAT).

The first reaction from members of the junior class should be an enormous sigh of relief: there will be one less set of tests to take during the school year. The second sigh will come from other students, faculty members, and the administrative team for two major reasons: the computer labs will now be available year-round, and schedules will not have to be rearranged for testing sessions.

SAT vs. SBAC Brand

In addition, the SAT’s credibility will most likely earn more buy-in from all stakeholders. Students know what the SAT brand is and what the scores mean; students are already invested in doing well for college applications. Even the shift from the old top score of 1600 (pre-2005) to 2400 with the addition of an essay has been met with the general understanding that a top score is 800 in each section (math, critical reading, and writing). A student’s SAT scores are part of a college application, and a student may take the SAT repeatedly in order to submit the highest score.

In contrast, the SBAC brand never reported individual student results. The SBAC was created as an assessment for collecting data for teacher and/or curriculum evaluation. When predictions of the percentage of anticipated failures in math and English were released, teachers were frustrated and students grew even more disinterested. There was no ability to retake, and if the predictions meant no one could pass, why should students even try?

Digital Testing

Moreover, while the SBAC drove the adoption of digital testing in the state in grades 3-8, most of the pre-test skill development was still done in paper-and-pencil format. Unless a school district consistently offered seamless 1:1 technology integration, there could be a question as to what was being assessed: a student’s technical skills or application of background knowledge. Simply put, skills developed with paper and pencil may not translate the same way on digital testing platforms.

As a side note, those who use computer labs or develop student schedules will be happy to know that the SAT is not a digital test… at least not yet.

US Education Department Approved Request 

According to an early report (2012) by the Brookings Institution, the SBAC’s full suite of summative and interim assessments and the Digital Library on formative assessment was first estimated to cost $27.30 per student (grades 3-11). The assessment was designed to be economical if many states shared the same test.

Since that initial report, several states have left the Smarter Balanced Consortium entirely.

In May, the CT legislature voted to halt the SBAC in grade 11 in favor of the SAT. This switch will increase the cost of testing. According to an article (5/28/15) in the CT Mirror, “Debate Swap the SAT for the Smarter Balanced Tests”:

“Testing students this year and last cost Connecticut $17 million, the education department reports. And switching tests will add cost, Commissioner of Education Dianna Wentzell said.”

The U.S. Department of Education approved this switch for Connecticut schools on Thursday, 8/6/15; the CT Department of Education had asked that the state not be penalized under the No Child Left Behind Act’s rigid requirements. For now, the switch to the SAT does not change the tests in grades 3-8; the SBAC will continue at those grade levels.

Why SBAC at All?

All this raises the question: why was 11th grade selected for the SBAC in the first place? Was the initial cost a factor?

Since the 1990s, the State of Connecticut had given the Connecticut Academic Performance Test (CAPT) in grade 10, and even though the results were reported late, there were still two years to remediate students who needed to develop skills. In contrast, the SBAC was given in the last quarter of grade 11, leaving less time to address struggling students’ needs. I mentioned these concerns in an earlier post: The Once Great Junior Year, Ruined by Testing.

Moving the SBAC to junior year increased the amount of testing for those electing to take the SAT, with some students also taking the ASVAB (Armed Services Vocational Aptitude Battery) or being selected to take the NAEP (National Assessment of Educational Progress).

There have been three years of “trial testing” for the SBAC in CT, with limited feedback to teachers and students. In contrast, the results from the SAT have always been available as an assessment to track student progress, with results reported to school guidance departments.

Before No Child Left Behind, before the Common Core State Standards, before the SBAC, the SAT was there. What took them (the legislature, the Department of Education, etc.) so long?

Every Junior Will Take the NEW SAT

Denver Post: Heller

In the past, not every student elected to take the SAT test, but many districts did offer the PSAT as an incentive. This coming year, the SAT will be given to every 11th grader in Connecticut.

The big wrinkle in this plan?
The SAT test has been revised (again) and will be new in March 2016.

What should we expect with this test?

My next headline?


As the 10th grade English teacher, Linda’s role had been to prepare students for the rigors of the State of Connecticut Academic Performance Test, otherwise known as the CAPT. She had been preparing students with exam-released materials, and her collection of writing prompts stretched back to 1994. Now that she is retiring, it is time to clean out the classroom. English teachers are not necessarily hoarders, but there was evidence to suggest that Linda was stocked with enough class sets of short stories to ensure students were always more than adequately prepared. Yet she was delighted to see these particular stories go.
“Let’s de-CAPT-itate,” we laughed and piled up the cartons containing well-worn copies of short stories.
Out went Rough Touch. Out went Machine Runner. Out went Farewell to Violet and A View from the Bridge.
I chuckled at the contents of the box labeled “depressing stories” before chucking them onto the pile.
Goodbye to Amanda and the Wounded Birds. Farewell to A Hundred Bucks of Happy. Adios to Catch the Moon. We pulled down another carton labeled “dog stories” containing Liberty, Viva New Jersey, and The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum. They too were discarded without a tear.
The chief flaw of the CAPT’s Response to Literature was its ludicrous diluting of Louise Rosenblatt’s Reader Response Theory, where students were asked to “make a connection”:

What does the story say about people in general?  In what ways does it remind you of people you have known or experiences you have had?  You may also write about stories or other books you have read, or movies, works of art, or television programs you have seen.

That question was difficult for many literal readers who, in responding to the most obvious plot point, might answer, “This story has a dog and I have a dog.” How else to explain all the dog stories? On other occasions, I found out that while taking standardized tests in the elementary grades, students had been told, “If you have no connection to the story, make one up!” Over the years, the CAPT turned our students into very creative liars rather than literary analysts.


The other flaw in the Response to Literature was the evaluation question. Students were asked:

How successful was the author in creating a good piece of literature?  Use examples from the story to explain your thinking.

Many of our students found this a difficult question to negotiate, particularly if they thought the author did not write a good piece of literature, but rather an average or mildly enjoyable story. They did manage to make their opinions known, and one of my favorite student responses began, “While this story is no Macbeth, there are a few nice metaphors…”

Most of the literature on the CAPT did come from reputable writers, but the selections were not of the quality of stories found in anthologies, like Saki’s The Interlopers or Anton Chekhov’s The Bet. To be honest, I did not think the CAPT essays were an authentic activity, and I particularly did not like the selections on the CAPT’s Response to Literature section.

Now the CAPT will be replaced by the Smarter Balanced Assessments (SBAC), as Connecticut has selected SBAC as its assessment consortium to measure progress with the Common Core State Standards, and the test will move to 11th grade. This year (2014) is the pilot test only; there are no exemplars and no results. The SBAC is digital, and in the future we will practice taking this test on our devices, so there is no need to hang onto class sets of short stories. So why am I concerned that there will be no real difference with the SBAC? Cleaning the classroom may be a transition more symbolic of our move from paper to keyboard than of our gaining an authentic assessment.

Nevertheless, Linda’s classroom looked several tons lighter.

“We are finally de-CAPT-itated!” I announced looking at the stack of boxes ready for the dumpster.

“Just in time to be SBAC-kled!” Linda responded cheerfully.

This post completes a trilogy of reflections on the Connecticut Academic Performance Test (CAPT), which will be terminated once the new Smarter Balanced Assessments tied to the Common Core State Standards (CCSS) are implemented. There will be at least one more year of the same CAPT assessments, specifically the Interdisciplinary Writing Prompt (IW), where 10th grade students write a persuasive essay in response to news articles. While the horribly misnamed Response to Literature (RTL) prompt confuses students as to how to truthfully evaluate a story and drives students into “making stories up” in order to respond to a question, the IW shallowly addresses persuasive writing with prompts that have little academic value.

According to the CAPT Handbook (3rd Generation) on the CT State Department of Education’s website, the IW uses authentic nonfiction texts that have been:

“… published and are informational and persuasive, 700-1,000 words each in length, and at a 10th-grade reading level.  The texts represent varied content areas (e.g., newspaper, magazine, and online articles, journals, speeches, reports, summaries, interviews, memos, letters, reviews, government documents, workplace and consumer materials, and editorials).  The texts support both the pro and con side of the introduced issue.  Every effort is made to ensure the nonfiction texts are contemporary, multicultural, engaging, appropriate for statewide implementation, and void of any stereotyping or bias.  Each text may include corresponding maps, charts, graphs, and tables.”

Rather than being taught in English, interdisciplinary writing is taught in social studies because the subject of social studies is already interdisciplinary. The big tent of social studies includes elements of economics, biography, law, statistics, theology, philosophy, geography, sociology, psychology, anthropology, political science and, of course, history. Generally, 9th and 10th grade students study the Ancient World through the Modern European World (through WWII) in social studies. Some schools may offer civics in grade 10.

Social studies teachers always struggle to capture the breadth of history, usually Western Civilization, in two years. However, for 15 months before the CAPT, social studies teachers must also prepare students to write for the IW test. But does the IW reflect any of the content-rich material in social studies class? No, it does not. Instead, the IW prompt is developed around some “student-centered” contemporary issue. For example, past prompts have included:

  • Should students be able to purchase chocolate milk in school?
  • Should utility companies construct wind farms in locations where windmills may impact scenery or wildlife?
  • Should ATVs be allowed in Yellowstone Park?
  • Should the school day start later?
  • Should an athlete who commits a crime be allowed to participate on a sports team?
  • Should there be random drug testing of high school students?

On the English section of the test, there are responses dealing with theme, character and plot. On the science section, the life, physical and earth sciences are woven together in a scientific inquiry. On the math section, numeracy is tested in problem-solving. In contrast to these disciplines, the social studies section, the IW, has little or nothing to do with the subject content. Students only need to write persuasively on ANY topic:

For each test, a student must respond to one task, composed of a contemporary issue with two sources representing pro/con perspectives on the issue.  The task requires a student to take a position on the issue, either pro or con.  A student must support his or her position with information from both sources.  A student, for example, may be asked to draft a letter to his or her congressperson, prepare an editorial for a newspaper, or attempt to persuade a particular audience to adopt a particular position.  The task assesses a student’s ability to respond to five assessed dimensions in relationship to the nonfiction text: (1) take a clear position on the issue, (2) support the position with accurate and relevant information from the source materials, (3) use information from all of the source materials, (4) organize ideas logically and effectively, and (5) express ideas in one’s own words with clarity and fluency.

The “authentic” portions of this test are the news articles, but the released materials illustrate that these news articles are never completely one-sided; if they are written well, they already include a counter-position.  Therefore, students are regurgitating already highly filtered arguments. Secondly, the student responses never find their way into the hands of the legislators or newspaper editors, so the responses are not authentic in their delivery. Finally, because these prompts have little to do with social studies, valuable time that could be used to improve student content knowledge of history is being lost.  Some teachers use historical content to practice writing skills, but there is always instructional time used to practice with released exam materials.

Why are students asked to argue about the length of a school day when, if presented with enough information, they could argue a position that reflects what they are learning in social studies? If they were provided the same kinds of newspaper, magazine, and online articles, journals, speeches, reports, summaries, interviews, memos, letters, reviews, government documents, workplace and consumer materials, and editorials, could students write persuasive essays with social studies content that is measurable? Most certainly. Students could argue whether they would support a government like Athens or a government like Sparta. Students could be provided brief biographies and statements of belief for different philosophers to argue whom they would prefer as a teacher, Descartes or Hegel. Students could write persuasively about which amendment of the United States Constitution they believe needs to be revisited, Amendment 10 (States’ Rights) or Amendment 27 (Limiting Changes to Congressional Pay).

How unfortunate that such forgettable issues as chocolate milk or ATVs are considered worthy of determining a student’s ability to write persuasively. How inauthentic to encourage students to write to a legislator or editor and then do nothing with the students’ opinions. How depressing to know that the time and opportunity to teach and to measure a student’s understanding of the rich content of social studies is lost every year with IW test preparation.

Maybe the writers of the CAPT IW prompt should have taken a lesson from the writers of Saturday Night Live and the “Coffee Talk” sketches with Mike Myers. In these sketches, Myers played Linda Richman, host of the call-in talk show “Coffee Talk”. When s(he) would become too emotional (or verklempt) to talk, s(he) would “give a topic” to talk about “amongst yourselves”. Holding back tears, waving red nails in front of his face furiously, Myers would gasp out one of the following:

“The Holy Roman Empire was neither holy, Roman, nor an empire….Discuss…”

“Franklin Delano Roosevelt’s New Deal was neither new nor a deal…. Discuss…”

“The radical reconstruction of the South was neither radical nor a reconstruction…. Discuss…”

“The internal combustion engine was neither internal nor a combustion engine…. Discuss…”

If a comedy show can come up with these academic topics for laughs, why can’t students answer them for real? At least they would understand what made the sketches funny, and that understanding would be authentic.

As the Connecticut State Standardized tests fade into the sunset, teachers are learning to say “Good-bye” to all those questions that ask the reader to make a personal connection to a story. The incoming English Language Arts Common Core Standards (ELA-CCSS) are eradicating the writing of responses that begin with, “This story reminds me of…” Those text-to-self, text-to-text, and text-to-world connections that students have made at each grade level are being jettisoned. The newly designed state assessment tests will tolerate no more fluff; evidence-based responses only, please.

Perhaps this hard-line attitude towards literacy is a necessary correction. Many literacy experts had promoted connections to increase a reader’s engagement with a text. For example,

 “Tell about the connections that you made while reading the book. Tell how it reminds you of yourself, of people you know, or of something that happened in your life. It might remind you of other books, especially the characters, the events, or the setting” (Guiding Readers and Writers Grades 3-6, Fountas and Pinnell) 

Unfortunately, the question became overused, asked for almost every book at each grade level. Of course, many students did not have similar personal experiences to make a connection with each and every text. (Note: Given some of the dark literature, vampires and zombies, that adolescents favor, not having personal experience may be a good sign!) Other students did not have enough reading experience or the sophistication to see how the themes in one text were similar to themes in another text. Some of the state assessment exemplars revealed how students often made limited or literal connections, for example: “The story has a dog; I have a dog.”

The requirement to make a connection to each and every story eventually led to intellectual dishonesty. Students who were unable to call to mind an authentic connection faked a relationship or an experience. Some students claimed they were encouraged by their teachers to “pretend” they knew someone just like a character they read about. “Imagine a friend had the same problem,” they were told. Compounding this problem was the inclusion of this connection question on the state standardized tests, the CAPT (grade 10) and the CMT (grades 3-8). So, some students traded story for story in their responses, and they became amazingly creative in answering this question. I mentioned this in a previous post when a student told me that the sick relative he had written about in a response didn’t really exist. “Don’t worry,” he said brightly after I offered my condolences, “I made that up!”

Last week, our 9th grade students took a practice standardized test with the “make a connection question” as a prompt. They still need to practice since there is one more year of this prompt before ELA CCSS assessments are in place. The students wrote their responses to a story where the relationship between a mother and daughter is very strained. One of the students wrote about her deteriorating and very difficult relationship with her mother. I was surprised to read how this student had become so depressed and upset about her relationship with her mother. I was even more surprised that afternoon when that same mother called to discuss her daughter’s grade. I hesitated a little, but I decided to share what was written in the essay as a possible explanation. The next day, I received the following e-mail,

“I told M___that I read the practice test where she said I didn’t have time to talk and other things were more important. She just laughed and said that she had nothing in common with the girl in the story so she just made that up because she had to write something. We had a good laugh over that and I felt so relieved that she didn’t feel that way.”

After reading so many student “make a connection” essays, I should have seen that coming!

Good-bye, “Make a Connection” question. Ours was an inauthentic relationship; you were just faking it.

March in Connecticut brings two unpleasant realities: high winds and the state standardized tests. Specifically, the Connecticut Academic Performance Tests (CAPT) are given in grade 10 in the subjects of math, social studies, science, and English.

There are two tests in the English section of the CAPT to demonstrate student proficiency in reading. In one, students are given a published story of 2,000-3,000 words in length at a 10th-grade reading level. They have 70 minutes to read the story and draft four essay responses.

What is being tested is the student’s ability to comprehend, analyze, synthesize, and evaluate. While these goals are properly aligned to Bloom’s taxonomy, the entire enterprise smacks of intellectual dishonesty when “Response to Literature” is the title of this section of the test.

Literature is defined online as:

“imaginative or creative writing, especially of recognized artistic value; or writings in prose or verse, especially writings having excellence of form or expression and expressing ideas of permanent or universal interest.”

What the students read on the test is not literature. What they read is a story.

A story is defined as:

“an account of imaginary or real people and events told for entertainment.”

While the distinction may seem small at first, the students have a very difficult time responding to the last of the four questions asked in the test:

How successful was the author in creating a good piece of literature? Use examples from the story to explain your thinking.

The problem is that the students want to be honest.

When we practice writing responses to this question, we use the released test materials from previous years: “Amanda and the Wounded Birds”, “A Hundred Bucks of Happy”, “Machine Runner” or “Playing for Berlinsky”.  When the students write their responses, they are able to write they understood the story and that they can make a connection. However, many students complain the story they just read is not “good” literature.

I should be proud that the students recognize the difference. In Grades 9 & 10, they are fed a steady diet of great literature: The Odyssey, Of Mice and Men, Romeo and Juliet, All Quiet on the Western Front, Animal Farm, Oliver Twist. The students develop an understanding of characterization. They are able to tease out complex themes and identify “author’s craft”. We read the short stories “The Interlopers” by Saki, “The Sniper” by Liam O’Flaherty, or “All Summer in a Day” by Ray Bradbury. We practice the CAPT good-literature question with these works of literature. The students generally score well.

But when the students are asked to do the same for a CAPT story like the 2011 story “The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum”, they are uncomfortable trying to find the same rich elements that make literature good. A few students will be brave enough to take on the question with statements such as:

  • “Because these characters are nothing like Lenny and George in Of Mice and Men…”
  • “I am unable to find one iota of author’s craft, but I did find a metaphor.”
  • “I am intelligent enough to know that this is not ‘literature’…”

I generally caution my students not to write against the prompt. All the CAPT released exemplars are rife with praise for each story offered year after year. But I also recognize that calling the stories offered on the CAPT “literature” promotes intellectual dishonesty.

Perhaps the distinction between literature and story is not the biggest problem that students encounter when they take a CAPT Response to Literature. For at least one more year students will handwrite all responses under timed conditions: read a short story (30 minutes) and answer four questions (40 minutes). Digital platforms will be introduced in 2014, and that may help students who are becoming more proficient with keyboards than pencils.
But even digital platforms will not halt the other significant issue with one other question, the “Connection question (#3)” on the CAPT Response to Literature:

 What does this story say about people in general? In what ways does it remind you of people you have known or experiences you have had?  You may also write about stories or books you have read or movies, works of art, or television programs you have seen.  Use examples from the story to explain your thinking.

Inevitably, a large percentage of students write about personal experiences when they make a connection to the text. They write about “friends who have had the same problem” or “a relative who is just like” or “neighbors who also had trouble”.  When I read these in practice session, I sometimes comment to the student, “I am sorry to hear about____”.

However, the most frequent reply I get is often startling.

“No, that’s okay. I just made that up for the test.”

At least they know that their story, “an account of imaginary or real people and events told for entertainment,” is not literature, either.

The fiction selected for standardized testing is notorious for its singular ability not to challenge; these stories do not challenge political or religious beliefs, and  I have long suspected they are selected because they do not challenge academically.
My state of Connecticut has had great success locating and incorporating some of the blandest stories ever written for teens to use in the “Response to Literature” section of the Connecticut Academic Performance Test (CAPT).
The CAPT was first administered to students in grade 10 in the spring of 1994, and the quality of the “literature” has been less than challenging. For example:
  • Amanda and the Wounded Birds: A radio psychologist is too busy to notice the needs of her teen-age daughter;
  • A Hundred Bucks of Happy: An unclearly defined narrator finds a $100 bill and decides to share the money with his/her family (but not his/her dad);
  • Catch the Moon: A young man walks a fine line between delinquency and a beautiful young woman (to be fair, there was a metaphor in this story).
At least three of the stories have included dogs:
  • Liberty: a dog cannot immigrate to the USA with his family;
  • Viva New Jersey: a lost dog makes a young immigrant feel better;
  • The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum: not exactly an immigrant story, but a dog emigrates from family to family in a custody battle.
We were always on the lookout for a CAPT-like story of the requisite forgettable quality for practice when we came upon the story “A View from the Bridge” by Cherokee Paul McDonald. The story was short, with average vocabulary, average character development, and average plot complexity. I was reminded of this one particular story last week when Sean, a former student, stopped by the school for a visit during his winter break from college.

The short story “A View from the Bridge” was used as a practice CAPT test prompt.

Sean was a bright student who, through his own choice, remained seriously under-challenged in class. For each assignment, Sean met the minimum requirement: minimum words required, minimum reading level for an independent book, minimum time spent on a project. I knew that Sean was more capable, but he was not going to give me the satisfaction of finding out, that is, until “A View from the Bridge.”
The story featured a runner out for his jog who stopped on a bridge to take a break near a young boy who was fishing, his tackle nearby. After a brief conversation, the jogger realized that the young boy was blind. The story concluded with the jogger describing a fish the blind boy had caught but could not see; the boy was delighted, and the jogger reaffirmed that he should help his fellow man/boy.
“The story A View from the Bridge by McDonald is the most stupid story I have ever read,” wrote Sean in essay #1 in his Initial Response to Literature.
“I mean, who lets a blind boy fish by himself on a bridge? He could fall off into the water!”
I stopped reading. How had I not thought about this?
Sean continued, “Also, fishhooks are dangerous. A blind kid could put a fishhook right into a finger. How would he get that out? A trip to the emergency room, that’s how, and emergency rooms are expensive. I know, because I had to go for stitches and the bill was over $900.00.”
Wow! Sean was “Making a Connection”, and well over his minimum word count. I was very impressed, but I had a standardized rubric to follow. Sean was not addressing the details in the story. His conclusion was strong:
“I think that  kid’s mother should be locked up!”
I was in a quandary. How could I grade his response against the standardized rubric? Furthermore, he was right. The story was ridiculous, but how many other students had seen that? How many had addressed this critical flaw in the plot? Only Sean was demonstrating critical thinking; the other students were all writing like the trained seals we had created.
One theory of grading suggests that teachers should reward students for what they do well, regardless of a rubric. So Sean received a passing grade on this essay assignment. There were other students who scored higher because they met the criteria, but I remember thinking how Sean’s response communicated a powerful reaction to a story beyond the demands of the standardized test. In doing so, he reminded me of the adage, “There are none so blind as those who cannot see.”

Is this the Age of Enlightenment? No.
Is this the Age of Reason? No.
Is this the Age of Discovery? No.

This is the Age of Measurement.

Specifically, this is the age of measurement in education where an unprecedented amount of a teacher’s time is being given over to the collection and review of data. Student achievement is being measured with multiple tools in the pursuit of improving student outcomes.

I am becoming particularly attuned to the many ways student achievement is measured as our high school is scheduled for an accreditation visit by the New England Association of Schools and Colleges (NEASC) in the spring of 2014. I am serving as co-chair with the very capable library media specialist, and we are preparing the use of school-wide rubrics.

Several of our school-wide rubrics currently in use have been designed to complement scoring systems associated with our state tests,  the Connecticut Mastery Tests (CMT) or Connecticut Academic Performance Tests (CAPT). While we have modified the criteria and revised the language in the descriptors to meet our needs, we have kept the same number of qualitative criteria in our rubrics. For example, our reading comprehension rubric has the same two scoring criteria as does the CAPT. Where our rubric asks students to “explain”, the CAPT asks students to “interpret”. The three rating levels of our rubric are “limited”, “acceptable”, and  “excellent” while the CAPT Reading for Information ratings are “below basic”, “proficient”, and “goal”.

We have other standardized rubrics as well. For example, we have rubrics that mimic the six-point PSAT/SAT scoring scale for our junior essays, and rubrics that address the nine-point Advanced Placement scoring scale.

Our creation of rubrics to match the scoring scales of standardized tests is not an accident. Our customized rubrics help our teachers determine a student’s performance growth on common assessments that serve as indicators for standardized tests. Many of our current rubrics correspond to standardized test scoring scales of 3, 6, or 9 points; however, these rating levels will soon change.

Our reading and writing rubrics will need to be recalibrated in order to present NEASC with school-wide rubrics that measure 21st Century Learning skills; other rubrics will need to be designed to cover our topics. Our NEASC committee at school has determined that four-point scoring rubrics would be more appropriate in creating rubrics for six topics:

  • Collaboration
  • Information literacy*
  • Communication*
  • Creativity and innovation
  • Problem solving*
  • Responsible citizenship

These six scoring criteria for NEASC highlight a measurement gap created by relying on standardized tests, which directly address only three (*) of these 21st Century skills. Measuring the other 21st Century skills requires schools like ours to develop their own data stream.

Measuring student performance should require multiple metrics. Measuring student performance in Connecticut, however, is complicated by the lack of common scoring rubrics between the state standardized tests and the accrediting agency NEASC. The scoring of the state tests themselves can also be confusing, as three (3) or six (6) point score results are organized into bands labelled 1-5. Scoring inequities could be exacerbated when the CMT, CAPT, and similar standardized tests are used in 2013 and 2014 as 40% of a teacher’s evaluation, with an additional 5% based on whole-school performance. The measurement of student performance in 21st Century skills will be addressed in teacher evaluation through the Common Core State Standards (CCSS), but tests aligned to those standards are still being designed. By 2015, new tests that measure student achievement according to the CCSS, with their own criteria, levels, and descriptors in new rubrics, will be implemented. This emphasis on standardized tests measuring student performance with multiple rubrics has become the significant measure of student and teacher performance, a result of the newly adopted Connecticut Teacher Evaluation (SEED) program.

The consequence is that today’s classroom teachers spend a great deal of time reviewing data that has limited correlation between the standards of measurement found in state-wide tests (CMT, CAPT, CCSS), those in nation-wide tests (AP, PSAT, SAT, ACT), and those expected by accrediting agencies (NEASC). Ultimately, valuable teacher time is being expended in determining student progress across a multitude of rubrics with little correlation; yes, in simplest terms, teachers are spending a great deal of time comparing apples to oranges.

I do not believe that a single-metric measurement such as Connecticut’s CMT or CAPT or any standardized test accurately reflects a year of student learning; I believe that these tests are snapshots of student performance on a given day. The NEASC goal of accrediting schools by measuring student performance with school-wide rubrics that demonstrate 21st Century skills is more laudable. However, as the single test metric has been adopted as a critical part of Connecticut’s newly adopted teacher evaluation system, teachers here must serve two masters, testing and accreditation, each with its own separate system of measurement.

With the aggregation of all these differing data streams, there is one data stream missing. There is no data being collected on the cost in teacher hours for the collection, review, and recalibration of data. That specific stream of data would show that in this Age of Measurement, teachers have less time to work with students, the kind of time that could allow teachers to engage students in the qualities from ages past: reason, discovery, and enlightenment.

The marathon of testing is over! In the State of Connecticut, the two-week window for the Connecticut Mastery Tests (CMT, elementary) and the Connecticut Academic Performance Test (CAPT, grade 10) has ended, and some teachers are looking at the two-week “hole” in grade books and unit plans that the intensive state testing created.

While education experts strongly advocate against “teaching to the test” and advocate the development of skills, most classroom teachers feel some obligation to prepare students for the tests by simulating at least one timed practice session for a specific test. Our state releases past testing materials for each discipline, and to be honest, our students do a fair amount of practice with these released materials before the test.

For the past two weeks, the daily school schedules have been modified to accommodate early morning testing sessions. During the school day, the lessons for students who have spent a grueling 45-90 minutes calculating or writing have been modified as well. For example, when they finally attend English classes, our tenth grade students have been provided silent sustained reading time for books they have independently chosen, or have been watching videos to supplement a world literature unit on people in conflict.

The reading or Response to Literature test, associated with English classes, requires students to read a short story and then write four lengthy responses. Sadly, year after year, the quality of the story on this test pales in comparison to the classic short stories a student will encounter in even the most limited literature anthology. So we prepare students to respond to a question that asks “Is this good literature?” with even the most mediocre story. Now that the test is over, students will begin the epic poem Beowulf, and the teachers are looking forward to having the students engage with this 8th Century grandfather of literature. We are ready for some “epic-hero-wrenching-monster’s-arm” action.

The writing or Writing Across the Disciplines test, associated with social studies, requires students to read newspaper articles about a controversial topic, take a position on the controversy, and then develop a persuasive argument. There is absolutely no content from the social studies curriculum, in this case Modern World History, associated with the test. Now that this test is over, teachers can return to the history content outlined in their curricula; back to the arrival of the American forces on the shores of Iwo Jima and in the forests of the Ardennes.

Pencil and scantron testing is not an authentic practice in the world outside the classroom, but I am not against testing as a means to determine student progress; I accept that some form of testing is inevitable in education. However, the past two weeks of reading instructions (“Does everyone have two #2 pencils?”), writing in booklets (“Stop. Do not turn the page”), and racing against the clock (“You have 10 minutes left”) have taken a toll on students and faculty alike. Everyone is looking forward to the routine of a regular schedule.

Wearily, our students climbed the stairs for the last time this morning after taking the final “supplemental” test, an extra assessment given to try out materials for future test-takers. The students’ time in the testing crucible has passed; their scores will be posted during the lazy days of summer, when this experience will be nothing but a memory. Hopefully, they will have done well, and we will be pleased with the results.

Post-CAPT, there are several weeks left in the third quarter, and one full quarter after that. Teachers can return to content, without incorporating CAPT preparation, with clear consciences. Our tenth graders will have the chance to read Macbeth, where they will have the opportunity to create and respond to more significant questions than “Is this good literature?” The importance of this great play, set against the activities of the past two weeks, puts me in mind to parody Shakespeare’s famous speech:

Out, out, two weeks of testing
The CAPT is but a walking shadow, a poor player,
That struts and frets his hour upon the stage,
And then is heard no more. It is a test,
mandated by others, full of sound and fury,
Signifying nothing.

NEWLY EDITED 12/29/12:
I hate Reader Response Theory, the theory that considers readers’ reactions to literature as vital to interpreting the meaning of the text.

I hate how Reader Response Theory has been abused by standardized testing. The two most annoying questions for me in the Connecticut standardized testing for reading (CAPT-Response to Literature) are the reader-response-based questions to a short story prompt:

  • CAPT #1: What are your thoughts and questions about the story? You might reflect upon the characters, their problems, the title, or other ideas in the story.
  • CAPT #4: How successful was the author in creating a good piece of literature? Use examples from the story to explain your thinking.

After 10 years of teaching with this standardized test, I can recognize how many of my students struggle with these questions. Many lack the critical training, gained from extensive reading experiences, needed to judge the quality of a text. Combine this lack of reader experience with the see-saw quality of the text on the exam from year to year. Since classic short stories such as those by Saki, Anton Chekhov, Kate Chopin, Stephen Crane, and Jack London, to name a few, are considered too difficult for independent reading by 3rd quarter 10th grade students, more contemporary selections have been used on the exam. For example, the stories in past years have included Amanda and the Wounded Birds by Colby Rodowsky, Catch the Moon by Judith Ortiz Cofer, and Playing for Berlinsky by Jourdan U, published in Teen Ink. While some stories are well-written, many lack the complexity and depth that would generate thoughtful responses to a prompt that asks about “good literature.” My students are in the uncomfortable position of defending an average quality story as good; the prompt promotes intellectual dishonesty.

So, I use a formula. I teach my students to answer the first question by listing their intellectual (What did you think?) and emotional (What did you feel?) reactions to the story. I have them respond by listing any predictions or questions they have about the text, and I have them summarize the plot in two short sentences. The formula is necessary because the students have only 10-15 minutes to answer this question in a full handwritten page before moving to the next one. The emphasis is reader response: what does the reader think of the story, rather than what did the author mean?

I teach how to answer the evaluation question in much the same way. Students measure the story against a pre-prepared set of three criteria; they judge a story’s plot, character(s), and language in order to evaluate what they determine is the quality of the story. Again, this set of criteria is developed by the student according to reader response theory, and again little consideration is given to author intent.

The newly adopted Common Core State Standards (CCSS) in Language Arts are designed differently. The focus is back on the text; what the reader thinks is out of favor. For example, in three of the ten standards, 10th grade students are required to:

  • Analyze how complex characters (e.g., those with multiple or conflicting motivations) develop over the course of a text, interact with other characters, and advance the plot or develop the theme;
  • Determine a theme or central idea of a text and analyze in detail its development over the course of the text, including how it emerges and is shaped and refined by specific details; provide an objective summary of the text;
  • Cite strong and thorough textual evidence to support analysis of what the text says explicitly as well as inferences drawn from the text.

Please note, there is nothing in the language of the standards that asks what the student thinks or feels about the text.

In an article titled “How Will Reading Instruction Change When Aligned to the Common Core?” on The Thomas B. Fordham Institute website (1/27/2012), Kathleen Porter-Magee discusses the shift from the student-centered response to the CCSS challenge “to help students (and teachers) understand that reading is not about them.”

Porter-Magee describes how David Coleman, one of the architects of the CCSS ELA standards, is promoting the close reading of texts, sometimes over extended periods of several days. The article notes that currently, “teachers often shift students’ attention away from the text too quickly by asking them what they think of what they’re reading, or how it makes them feel. Or by asking them to make personal connections to the story.” Coleman states that, “Common Core challenges us to help students (and teachers) understand that reading is not about them.” Instead, he advocates the practice of close reading, a practice that “challenges our overemphasis on personal narrative and personal opinion in writing classrooms.”

In addition to the movement away from reader response criticism, the CCSS will be upgrading the complexity of the texts. Porter-Magee notes that,

“Of course, there’s only value in lingering on texts for so long if they’re worthy of the time—and that is why the Common Core asks students to read texts that are sufficiently complex and grade-appropriate. Yes, such texts may often push students—perhaps even to their frustration level. That is why it’s essential for teachers to craft the kinds of text-dependent questions that will help them break down the text, that will draw their attention to some of the most critical elements, and that will push them to understand (and later analyze) the author’s words.”

In other words, the quality of the texts will be substantively different from the texts used in the past on the Response to Literature section of the CAPT. This should make the response about the quality of the text more authentic; a genuinely complex text can be analyzed as “good literature.” How the more complex texts will be used in testing, however, remains to be seen. A student trained in close reading will require more time with a complex text to generate a response.

I confess, the movement away from reader response is a move I applaud. A student’s response to a complex text is not as important for the CCSS as what the text says or what the author intended; evidence will supplant opinion.

However, I am very aware that every swing of the educational pendulum brings an equal and opposite reaction. Swish! Out with reader response. Swoop! In with close reading of complex texts. Students, this swing is not about you.

Beware the Ides of March!
March Madness!
Mad as a March Hare!

Why so much warning about March?
Well, here in Connecticut, our students are preparing for the Connecticut Mastery Tests (CMT) in grades 3-8 and the Connecticut Academic Performance Test (CAPT) in grade 10, which are given every March. While every good teacher knows that “teaching to the test” is anathema, there is always that little nagging concern that there should be a little practice in order to anticipate performance on a standardized test. So, we “practice” to the test.

In English, 10th grade students complete the Response to Literature section of the test, where they read a selected fiction story (2,000-3,000 words; RL 10th) and respond to four questions that ask them:

  • to give an initial reaction;
  • to note a character change or respond to a quote;
  • to make a connection to another story, life experience, or film;
  • to evaluate the quality of the story.

Unfortunately, an authentic practice session for this test is time consuming, requiring 70 minutes, which includes the reading of the story and the writing of the four essays, each roughly a full handwritten page. Needless to say, our students do not like multiple practice tests for the CAPT, so developing the skills needed to pass the Response to Literature must be addressed throughout the school year.

When practice time does arrive, students can be “deceived” into CAPT practice through technology. We have been trying two abbreviated practice approaches using our class netbooks, where students actively read a text using hyperlinks or use quiz/test-taking software. In these practice assessments the student responses are typed and shorter in length, but still cover the same questions. A hyperlinked practice test, including the sharing of results, can be done in one 40-minute class period.

In the first approach, we select a short story that can be read in under 15 minutes and embed questions at critical points in the text that are tied directly to the Response to Literature questions. The students then respond to these questions as they read. The easiest software to use in creating a hyperlinked text is Google Documents, using the “form” option to create individual questions. Each question’s URL can be hyperlinked at specific moments in the text. Multiple choice, scale, or grid questions are alternate selections that can be embedded in a story to provide a quick snapshot of a group’s understanding through the “show summary of responses” option once the assessment is complete. There are many short stories in the public domain that can be posted on a site such as Google Docs for student access without conflicting with copyright laws.
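For teachers comfortable with a little scripting, the same idea can be sketched outside of Google Docs. The snippet below is a minimal illustration, not our actual workflow: the story paragraphs, question wording, and form URLs are all placeholders. It builds a simple HTML page in which a hyperlinked question follows each marked paragraph, mimicking the embedded-question structure described above.

```python
# Minimal sketch: embed hyperlinked questions at marked points in a story.
# All story text, questions, and URLs below are hypothetical placeholders.

story_paragraphs = [
    "Paragraph one of the story, introducing the characters...",
    "Paragraph two, where the main character begins to change...",
    "The final paragraph and the story's resolution...",
]

# (paragraph index, question text, URL of the question form)
questions = [
    (0, "What are your first thoughts and questions about the story?",
     "https://example.com/form1"),
    (1, "How has the character changed so far?",
     "https://example.com/form2"),
    (2, "Was this a good piece of literature? Why or why not?",
     "https://example.com/form3"),
]

def build_page(paragraphs, embedded_questions):
    """Return an HTML page with a question link after each marked paragraph."""
    links = {idx: (text, url) for idx, text, url in embedded_questions}
    parts = ["<html><body>"]
    for i, para in enumerate(paragraphs):
        parts.append(f"<p>{para}</p>")
        if i in links:  # insert the hyperlinked question at this point
            text, url = links[i]
            parts.append(f'<p><a href="{url}">{text}</a></p>')
    parts.append("</body></html>")
    return "\n".join(parts)

page = build_page(story_paragraphs, questions)
with open("hyperlinked_story.html", "w") as f:
    f.write(page)
```

Students open the resulting page, read each section, and click through to the question at each break, which keeps the responses short and typed rather than full practice essays.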

The second approach uses quiz and test-taking software, such as Quia, where a teacher can paste sections of the text with questions posed at the end of each section. Ray Bradbury’s All Summer in a Day (under Creative Commons license) is one story we are currently using for CAPT practice next week; the practice test can be taken at

The use of hyperlinks to monitor student understanding or to practice a procedure that will be helpful in a standardized test is not difficult to implement. Teachers are able to choose the kinds of questions and the placement of questions at critical sections of a text, and students like the ability to respond as they read in short answers rather than in practice essays.

While there is nothing that can be done to stop the onslaught of tests that come in March, the embedded hyperlink provides a way to satisfy that urge to practice and still engage the students. You can even try a hyperlink response to a text by clicking here!

See? Wasn’t that easy?