
Graphic by Christopher King that accompanied the editorial piece “In Defense of Annual Testing”

My Saturday morning coffee was disrupted by the headline of the New York Times opinion piece “In Defense of Annual School Testing” (2/7/15) by Chad Aldeman, an associate partner at Bellwether Education Partners, a nonprofit education research and consulting firm. Agitating me more than the caffeine was clicking on Aldeman’s resume. Here was another policy analyst in education, without any classroom experience, who had served as an adviser to the Department of Education from 2011 to 2012. Here was another policy wonk with connections to the testing industry.

In a piece measuring less than 800 words, Aldeman contended that the “idea of less testing” in our nation’s schools, currently being considered by liberal and conservative groups alike, “would actually roll back progress for America’s students.”

…annual testing has tremendous value. It lets schools follow students’ progress closely, and it allows for measurement of how much students learn and grow over time, not just where they are in a single moment.

Here is the voice of someone who has not seen students take a standardized test when, yes, they are very much in “that single moment.” That “single moment” looks different for each student. An annual test does not consider the social and emotional baggage of that “single moment”: no dinner the night before; social media or video games until 1 AM; parental separation or divorce; a fight with a friend, a mother, a teacher; or general test anxiety. Educators recognize that students are not always operating at optimum levels on test days. No student likes being tested at any “single moment.”

Aldeman’s editorial advocates for annual testing because, he claims, it prevents the kinds of tests that report only a school’s averaged results. Taking a group average from a test, he notes, allows “the high performers frequently [to] mask what’s happening to low achievers.” He prefers the kinds of new tests that focus on groups of students with a level of analysis possible only with year-to-year measurement. That year-to-year measurement on these expensive new tests is, no doubt, preferred by testing companies as a steady source of income.

His opinion piece comes at a time when the anti-test movement is growing and states are weighing the expense of such tests. There is bipartisan agreement in the anti-test movement that students are already being assessed enough. There are suggestions that annual testing could be limited to specific grade levels, such as grades 3, 8, and 11, and that there are already enough assessments built into each student’s school day.

Educators engage in ongoing formative assessments (discussions, polls, homework, graphic organizers, exit slips, etc.) used to inform instruction. Interim and summative assessments (quizzes/tests) are used continuously to measure student performance. These multiple kinds of assessments provide teachers with the feedback to measure student understanding and to differentiate instruction for all levels of students.

For example, when a teacher uses a reading running record assessment, the data collected can help determine what instruction will improve a child’s reading competency. When a teacher analyzes a math problem with a child, the teacher can assess which computational skills need to be developed or reviewed.

Furthermore, there are important measures that cannot be taken by a standardized test. Engaging students in conversations may provide insight into the social or emotional issues that may be impeding that child’s academic performance.

Of course, the annual tests that Aldeman wants used to gain information on performance do not take up as much instructor time as the ongoing individual assessments given daily in classrooms. Testing does use manpower efficiently: one hour of testing can yield 30 student-hours of results, and a teacher need not be present to administer a standardized test. Testing can diagnose each student’s strengths and weaknesses at that “single moment” in multiple areas at the same time. But testing alone cannot improve instruction, and improving instruction is what improves student performance.

In a perverse twist of logic, the allocation of funds and class time to pay for these annual tests reduces the funds available to hire teachers and the instructional hours needed to improve and deliver the kind of instruction the tests recommend. Aldeman notes that the Obama administration has invested $360 million in testing, which illustrates its choice to allocate funds to support a testing industry, not schools. The high cost of developing tests and collecting the test data strips funds from state and local education budgets and limits the financial resources for improving the academic achievement of students, many of whom Aldeman claims have “fallen through the cracks.”

His argument for continuing annual testing does not mention the obscene growth of the testing industry: 57% over the past three years, to $2.5 billion, according to the Software & Information Industry Association. Testing now consumes the resources of every school district in the nation.

Aldeman concludes that annual testing should not be politicized, and that this time is “exactly the wrong time to accept political solutions leaving too many of our most vulnerable children hidden from view.”

I would counter that our most vulnerable children are not hidden from view by their teachers and their school districts. Sadly, their needs cannot be placed “in focus” when financial resources are reduced or even eliminated in order to fund this national obsession with testing. Aldeman’s defense is indefensible.

The Hollywood Academy released the 2015 Oscar nominations this past week, and its choices for best picture, best actor, and best director lit a firestorm on social media about the lack of diversity in those choices.

Some of the heated discussions called into question the make-up of the Academy, which, according to a 2014 Los Angeles Times article, is:

  • 93 percent white
  • 76 percent male
  • Average age of 63

The percentages that make up the homogenized Academy bear a striking resemblance to the make-up of the canon of literature traditionally taught in high school English classrooms, a list of works dominated by white male writers. There are numerous reasons why the literature is singular in gender and race: politics, economics, culture, and textbooks all play a part. The most probable explanation for why the traditional canon endures, however, may be as simple as teachers teaching the books they were taught.

Even the average age of the dead white male writers in the canon approaches that of the Academy’s members. A sampling of traditionally assigned authors with their ages at death (offered in no particular order): John Milton (72), Percy Bysshe Shelley (30), F. Scott Fitzgerald (44), Dylan Thomas (39), Arthur Miller (90), William Shakespeare (52), John Keats (27), Ernest Hemingway (62), William Faulkner (65), John Steinbeck (66), William Blake (70), George Orwell (47), and T.S. Eliot (77).

My observation that older white male literature dominates the curriculum is nothing new, and while there are glimmers of diversity, authorship bears little resemblance to readership. Occasionally, Richard Wright, Langston Hughes, and August Wilson pop up to address racial diversity, while the inclusion of Mary Shelley, Harper Lee, Jane Austen, and the Brontë sisters makes a worthwhile contribution to gender equity.

At the same time, there is a growing body of popular young adult literature from authors representing diversity, such as Jacqueline Woodson, Sharon Draper, Pam Muñoz Ryan, Gary Soto, and Sherman Alexie. In a manner akin to film audiences, students have been voting for these books with their pocketbooks or with their library checkouts. They are selecting materials (novels, graphic novels, anime, pop culture, biography) that they want to read.

As readers, students look for characters like themselves, with problems like their own, even if the stories are set in the ancient past or distant future. If a student never builds empathy with a character because all the assigned reading comes from the canon, then the canon is disconnected from personal experience and useless for that student. If creating lifelong readers is the goal, curriculum developers must pay attention to student interests and the trends in popular reading lists. Continuing the disconnect between the traditional canon in school and what students choose does little to build credibility.

That same kind of disconnect is seen in the nominations submitted by the Academy. Their choices show a wide gulf of opinion between critics and audiences, between the selected films and popular films at the box office. National Public Radio (NPR) film critic Bob Mondello noted the low audience numbers for many of the 2015 nominated films:

MONDELLO:  If you total up all of the grosses for all of the best picture nominees this year, you come up to about 200 million, which is roughly what a picture like “Teenage Mutant Ninja Turtles” makes all by itself so that you’re talking about very few eyeballs were on those pictures.

The difference Mondello notes is striking when the top three box office films are set against three of the films nominated for best picture:

TOP GROSSING:
1. Guardians of the Galaxy – $333,145,154
2. The Hunger Games: Mockingjay – Part 1 – $330,643,639
3. Captain America: The Winter Soldier – $259,766,572

NOMINATED FOR BEST PICTURE:
94. Birdman – $26,725,993
95. The Theory of Everything – $26,317,946
100. Boyhood – $24,357,447

Mondello further suggests that the Academy has not served its own self-interest in making its nominations:

And the idea here is that you’re not going to watch the Oscar telecast unless you have a horse in the race….And I think what they’re hoping is that the next six weeks up until the show, these movies will be seen by a lot more people. If they aren’t – and they only have 38 days to do this – then you’re going to have the lowest rated Oscars telecast in the history of the Oscars.

Encouraging people to attend the films nominated by the Academy will be a challenge, and the success of the Oscars this year will be determined by audience choice. The Academy’s deaf ear this year may make it more open to diversity in future years. In contrast, a deaf ear from curriculum developers who continue to assign literature from the canon because “it has always been taught” may leave student audiences disconnected and less interested in reading anything at all.

Hoping to bridge this disconnect are organizations such as the Children’s Book Council (CBC) Diversity Committee, whose mission statement is:

We endeavor to encourage diversity of race, gender, geographical origin, sexual orientation, and class among both the creators of and the topics addressed by kid lit. We strive for a more diverse range of employees working within the industry, of authors and illustrators creating inspiring content, and of characters depicted in children’s and young adult books.

The organization We Need Diverse Books is also committed to expanding diversity in literature, and in the video below, the popular YA writer John Green (The Fault in Our Stars, Paper Towns, Looking for Alaska) makes a compelling case for including other, newer voices in the literary canon taught in classrooms.

Unlike the choices made by this year’s Academy, the choices in English classrooms should represent diversity in authorship, genre, character, and topic, because the readership is diverse. Bob Mondello’s metaphor about engaging an audience for this year’s Oscar show could also be a metaphor for creating lifelong readers. Unless students “have a horse in the race” in what they read, they will not value the choices made for them.

Since I write to understand what I think, I have decided to focus this particular post on the different categories of assessments. My thinking has been motivated by helping teachers with ongoing education reforms that have increased demands to measure student performance in the classroom. I recently organized a survey asking teachers about a variety of assessments: formative, interim, and summative. In determining which is which, I have witnessed their assessment separation anxieties.

Therefore, I am using this “spectrum of assessment” graphic to help explain:


The “bands” between formative and interim assessments and the “bands” between interim and summative blur in measuring student progress.

At one end of the grading spectrum (right) lie the high-stakes summative assessments given at the conclusion of a unit, quarter, or semester. In a survey given to teachers in my school this past spring, 100% of teachers understood these assessments to be the final measure of student progress, and their list of examples was much more uniform:

  • a comprehensive test
  • a final project
  • a paper
  • a recital/performance

At the other end, lie the low-stakes formative assessments (left) that provide feedback to the teacher to inform instruction. Formative assessments are timely, allowing teachers to modify lessons as they teach. Formative assessments may not be graded, but if they are, they do not contribute many points towards a student’s GPA.

In our survey, 60% of teachers generally understood formative assessments to be those small assessments or “checks for understanding” that let them move on through a lesson or unit. In developing a list of examples, teachers suggested a wide range of formative assessments they used in their daily practice across multiple disciplines, including:

  • drawing a concept map
  • determining prior knowledge (K-W-L)
  • pre-tests
  • student proposals of a project or paper for early feedback
  • homework
  • entrance/exit slips
  • discussion/group work peer ratings
  • behavior ratings with a rubric
  • task completion
  • notebook checks
  • tweeting a response
  • commenting on a blog

But there was anxiety in trying to disentangle the variety of formative assessments from the other assessments in the multicolored band in the middle of the grading spectrum, the area given to interim assessments. This school year, the term interim assessment is new, and its introduction has caused the most confusion among members of my faculty. In the survey, teachers were first provided a definition:

An interim assessment is a form of assessment that educators use to (1) evaluate where students are in their learning progress and (2) determine whether they are on track to performing well on future assessments, such as standardized tests or end-of-course exams. (Ed Glossary)

Yet, one teacher responding to this definition on the survey noted, “sounds an awful lot like formative.” Others added small comments in response to the question, “Interim assessments do what?”

  • Interim assessments occur at key points during the marking period.
  • Interim assessments measure when a teacher moves to the next step in the learning sequence.
  • Interim assessments are worth less than a summative assessment.
  • Interim assessments are given after a major concept or skill has been taught and practiced.

Many teachers also noted how interim assessments should be used to measure student progress on standards such as those in the Common Core State Standards (CCSS) or standardized tests. Since our State of Connecticut is a member of the Smarter Balanced Assessment Consortium (SBAC), nearly all teachers placed practice for this assessment clearly in the interim band.

But finding a list of generic, or even discipline-specific, examples of other interim assessments has proved more elusive. Furthermore, many teachers questioned how many interim assessments are necessary to measure student understanding. While there are multiple formative assessments contrasted with a minimal number of summative assessments, there is little guidance on the frequency of interim assessments. So there was no surprise when 25% of our faculty was still confused when developing the following list of examples of interim assessments:

  • content- or skill-based quizzes
  • mid-tests or partial tests
  • SBAC practice assessments
  • common or benchmark assessments for the CCSS

Most teachers believed that the examples blurred along the spectrum of assessment, from formative to interim and from interim to summative. A summative assessment that went horribly wrong could be repurposed as an interim assessment, or a formative assessment that was particularly successful could move up to be an interim assessment. We agreed that the results were what determined how an assessment could be used.

Part of teacher consternation was the result of assigning category weights for each assessment so that there would be a common grading procedure using common language for all stakeholders: students, teachers, administrators, and parents. Ultimately the recommendation was to set category weights to 30% summative, 10% formative, and 60% interim in the Powerschool grade book for next year.
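As a quick arithmetic check on how those proposed weights combine, here is a minimal sketch of a weighted grade calculation (the student scores are invented for illustration, and Powerschool’s actual computation may differ):

```python
# Sketch of a weighted grade using the recommended category weights:
# 30% summative, 10% formative, 60% interim (from the discussion above).
WEIGHTS = {"summative": 0.30, "formative": 0.10, "interim": 0.60}

def weighted_grade(averages):
    """averages: dict mapping category -> percent average in that category."""
    # The three category weights must total 100% for the grade to make sense.
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[cat] * averages[cat] for cat in WEIGHTS)

# A hypothetical student: strong on daily formative work, weaker on finals.
student = {"summative": 78.0, "formative": 95.0, "interim": 85.0}
print(weighted_grade(student))  # 0.3*78 + 0.1*95 + 0.6*85 = 83.9
```

Note how heavily the 60% interim band dominates the final number, which is exactly why agreeing on what counts as “interim” mattered so much to the faculty.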

In organizing the discussion, and this post, I did come across several explanations of the rationale, or “why,” for separating out interim assessments. Educator Rick DuFour emphasized how the interim assessment responds to the question, “What will we do when some of them [students] don’t learn it [content]?” He argues that the data gained from interim assessments can help a teacher prevent failure in a summative assessment given later.

Another helpful explanation came from a 2007 study titled “The Role of Interim Assessments in a Comprehensive Assessment System,” by the National Center for the Improvement of Educational Assessment and the Aspen Institute. This study suggested three reasons to use interim assessments: instruction, evaluation, and prediction. The authors did not use a color spectrum as a graphic, but chose instead a right triangle to indicate the frequency of interim assessments for instructing, evaluating, and predicting student understanding.

I also predict that our teachers will become more comfortable with separating out interim assessments as a means to measure student progress once they see them as part of a large continuum that can, on occasion, be a little fuzzy. Like the bands on a color spectrum, the separations between assessments may blur, but all of them are necessary to give the complete (and colorful) picture of student progress.

At the intersection of data and evaluation, here is a hypothetical scenario:

A young teacher meets an evaluator for a mid-year meeting.

“85% of the students are meeting the goal of 50% or better; in fact, they just scored an average of 62.5%,” the young teacher says.

“That is impressive,” the evaluator responds, noting that the teacher had obviously met his goal. “Perhaps you could also explain how the data illustrates individual student performance and not just the class average?”

“Well,” says the teacher, offering a printout, “according to the (Blank) test, this student went up 741 points, and this student went up…” he continues to read from the spreadsheet, “81 points…and this student went up, um, 431 points, and…”

“So,” replies the evaluator, “these points mean what? Grade levels? Stanine? Standard score?”

“I’m not sure,” says the young teacher, looking a bit embarrassed, “I mean, I know my students have improved, they are moving up, and they are now at a 62.5% average, but…” he pauses.

“You don’t know what these points mean,” answers the evaluator. “Why not?”

This teacher, who tracked an upward trajectory of points, was able to illustrate a trend that his students are improving, but the points his students receive are meaningless without data analysis. What doesn’t he know?

“We just were told to do the test. No one has explained anything…yet,” he admits.

There will need to be time for a great deal of explaining as the new standardized tests, Smarter Balanced Assessments (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), that measure the Common Core State Standards (CCSS) are implemented over the next few years. These digital tests are part of an educational reform mandate that will require teachers at every grade level to become adept at interpreting data for use in instruction. This interpretation will require dedicated professional development at every grade level.

Understanding how to interpret data from these new standardized tests and others must be part of every teacher’s professional development plan. Understanding a test’s metrics is critical because of the possibility of misinterpreting results. For example, the data in the scenario above would appear to show one student (+741 points) making enormous leaps forward while another student (+81) lags behind. But suppose the scale measuring student performance on this particular test were organized in levels of 500-point increments. In that case, one student’s improvement of +741 may not seem so impressive, and a student achieving +431 may be falling short of moving up a level. Or perhaps the data might reveal that a student’s improvement of 81 points is not minimal, because that student had already maxed out near the top of the scale. In the drive to improve student performance, all teachers must have a clear understanding of how the results are measured, what skills are tested, and how this information can be used to drive instruction.
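To make the risk of misreading concrete, here is a small sketch of how those raw gains translate into levels under the hypothetical 500-point scale from the scenario (the scale width is an assumption for illustration, not any real test’s metric):

```python
# Hypothetical scale from the scenario: scores grouped into 500-point levels.
LEVEL_WIDTH = 500

def levels_gained(points):
    """Whole levels advanced by a given raw point gain."""
    return points // LEVEL_WIDTH

# The three gains the young teacher reported.
for gain in (741, 431, 81):
    print(gain, "points ->", levels_gained(gain), "level(s)")
```

Under this scale, the “enormous leap” of +741 is a single level, while both +431 and +81 round down to zero levels, so the apparent gulf between those students largely disappears once the metric is known.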

Therefore, professional development must include information on the metrics by which student performance will be measured on each different test. But professional development for data analysis cannot stop at the PowerPoint! Data analysis training cannot come “canned,” especially if the professional development is marketed by a testing company. Too often teachers are given information about testing metrics by those outside the classroom, with little opportunity to see how the data can help their practice in their individual classrooms. Professional development must include the conversations and collaborations that allow teachers to share how they could use, or do use, data in the classroom. Such conversations and collaborations will provide opportunities for teachers to review these test results against data from other assessments that support or contradict them.

Such conversations and collaborations will also allow teachers to revise lessons or units and update curriculum to address weakness exposed by data from a variety of assessments. Interpreting data must be an ongoing collective practice for teachers at every grade level; teacher competency with data will come with familiarity.

In addition, the collection of data should happen on a software platform that is accessible and integrated with other school assessment programs. The collection must be both transparent in reporting results and secure in protecting the privacy of each student. The benefit of technology is that digital testing platforms should be able to calculate results quickly, freeing up time for teachers to implement the changes suggested by data analysis. Most importantly, teachers should be trained in how to use this software platform.

Student data is critical in evaluating both teacher performance and curriculum effectiveness, and teachers must be trained to interpret the rich pool of data that is coming from the new standardized tests. Without the professional development steps detailed above, however, evaluation conversations in the future might sound like the response in the opening scenario:

“We just were told to do the test. No one has explained anything…yet.”

As the school year comes to a close, the buzz phrase is “student growth.” All stakeholders in education want to be able to demonstrate student growth, especially if that growth follows an upward trajectory like the graph at left.

Last week I had an opportunity to consider student growth through a different lens, one provided by a graduating senior preparing a presentation for a group of 7th and 8th graders.
I had assigned Steven and his classmates the task of developing TED-like talks to give to the middle schoolers. The theme of these talks was “The Most Important Lesson I Learned in 13 Years of Education.” Each talk was required to be short (3-5 minutes), to incorporate graphics, and to make a connection between what was learned and the outside world. I asked students to come up with some “profound” idea that made the lesson the most important of their academic careers. I gave them several periods to pitch ideas and practice.

Steven’s practice presentation was four slides long, on the lesson “Phase Changes of Water.” A graphic on each slide illustrated the changes of water from solid ice to liquid to vapor. The last slide illustrated the temperatures at which water undergoes a change and the amount of heat energy, in calories, expended to make each phase change.


“What you see in this graph,” Steven explained, “is that there is a stage, a critical point, where the amount of energy needs to increase to have water change from solid to liquid. The graph shows that stage of changing from solid to liquid is shorter than the stage where the amount of energy needs to increase to change water into steam.”
He pointed to the lines on the graph, first the shorter line labeled melting and then the longer line labeled vaporizing.
“So how is this a profound idea?” he asked. “Well, this chart is just like anything you might want to improve on. Sometimes you are working to go to the next level, but you hit a plateau, a critical point. You need to expend more energy for a longer period of time to get to that next level. Thank you.”

We clapped. Everyone sitting in class agreed that Steven had met the assignment. He met the time limit. He had graphics. He made a connection.
I saw something even more profound.

In less than three minutes, Steven had used what he had learned in physics to teach me a new way to consider the learning process. I could see phase changes, or phase transitions, illustrating the relationship between energy expended over time and academic performance. I could relabel the axis marked heat energy as “energy expended over time.” Some phase changes would be short, as in the change from ice to a liquid state. Other phase changes would be longer, as in the change from liquid to gas. Each line of phase change would be different.

For example, if I applied this idea to teaching English grammar, some student phase changes would be short, as in a student’s use of pronouns to represent a noun. Other phase changes could be much longer, such as that same student employing noun-pronoun agreement. Time and energy would need to be expended to improve individual student performance on this task.

But whose energy is measured in this re-imagined transition? Perhaps the idea of phase changes could be used to explain how a teacher’s energy expended in instruction over time, or during a critical point, could improve academic performance. The same idea could be used to demonstrate how a student must expend additional energy at a critical point to improve understanding in order to advance to the next level.

At the end of the school year, teachers need to provide evidence of individual student growth, but perhaps a student is in a transitioning phase and growth is not yet evident. The major variable in measuring student achievement is the length of the critical point of transition from one level to another, and that critical point could extend for the length of a school year or even longer. Growth may not be measurable in the time provided, and more energy may need to be expended.

What was so interesting to me was how Steven’s use of phase changes had given me another lens through which to view the students I assess and the teachers I evaluate. Because measuring academic progress is not fixed by the same physical laws where 540 calories are needed to turn 1 gram of water (at 100 degrees Celsius) to steam, each student’s graph of academic achievement (phase changes) varies. Critical points will sit at different levels of achievement and be measured by different lengths of energy expended. Despite the wishes of teachers, administrators, and students themselves, “growth” is rarely on that 45° trajectory. Instead, growth is represented by moving up a series of stages, or critical points, that illustrate the amount of energy, by student and/or teacher, spent over time.
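The physics behind Steven’s graph can be checked with the latent heats of water: roughly 80 calories per gram are absorbed at the melting plateau versus the 540 at the vaporization plateau, which is why the second flat stage is so much longer. A minimal sketch of that comparison (the temperature-raising segments between plateaus are omitted for simplicity):

```python
# Energy absorbed at each phase-change plateau for water, per gram.
# Approximate latent heats: ~80 cal/g to melt ice, ~540 cal/g to vaporize water.
LATENT_FUSION = 80         # cal/g, solid -> liquid at 0 degrees C
LATENT_VAPORIZATION = 540  # cal/g, liquid -> gas at 100 degrees C

def plateau_energy(grams, latent_heat):
    """Calories absorbed during a phase change, with no temperature rise."""
    return grams * latent_heat

melt = plateau_energy(1, LATENT_FUSION)        # 80 cal for 1 g
boil = plateau_energy(1, LATENT_VAPORIZATION)  # 540 cal for 1 g
print(boil / melt)  # the vaporization plateau takes 6.75x the energy
```

In Steven’s metaphor, that ratio is the point: some transitions simply demand several times the sustained energy of others before any visible change occurs.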

Energy matters, in physics and in student achievement. Steven’s TEDTalk gave me a new way to think about that. He was profound. I think he gets an A.

Across the pond, British students studying for the General Certificate of Secondary Education (GCSE) will no longer experience a narrative of growing up in the Jim Crow South, reenact the witch hunts of Salem, Massachusetts, or be immersed in stories of the Great Depression’s impact on itinerant laborers. A recent decision by the United Kingdom’s Department for Education means that British students will no longer be reading Harper Lee’s To Kill a Mockingbird, Arthur Miller’s The Crucible, or John Steinbeck’s Of Mice and Men. The Education Secretary of the United Kingdom, Michael Gove, has recommended a syllabus that favors British texts exclusively for students to “swot up” for their exam boards. In order to make more room for the strictly British diet of Eliot, Dickens, and at least one of the Brontë sisters, these 20th-century American classics are being dropped in favor of British texts. According to The Guardian, when Gove took office he told his party’s conference:

 “The great tradition of our literature – Dryden, Pope, Swift, Byron, Keats, Shelley, Austen, Dickens and Hardy – should be at the heart of school life.”

A number of responders have noted Gove’s concern that the American texts come with “ideological baggage” that is not relevant to British students. The controversy was sparked when the new syllabus for OCR, one of the biggest UK exam boards, was released. A statement by UK’s Department for Education noted that in the revised syllabus, specific (American) books are not banned, but rather:

In the past, English literature GCSEs were not rigorous enough and their content was often far too narrow. We published the new subject content for English literature in December. It doesn’t ban any authors, books or genres. It does ensure pupils will learn about a wide range of literature, including at least one Shakespeare play, a 19th-century novel written anywhere and post-1914 fiction or drama written in the British Isles. (“To Kill a Mockingbird and Of Mice and Men axed as Gove orders more Brit lit” Guardian)

Michael Gove, Britain’s Education Secretary, recommends removing 20th C American texts by Steinbeck, Miller, and Lee.

The news is not all bad, however. Instead of considering the removal a literary slight to the multitude of authors who write in English but do not serve the Crown, perhaps Americans should be grateful that those discomforting moments of U.S. history are being hidden, systematically expunged from the prying eyes of young British readers. Students do not need to be exposed to the effects of prejudice, intolerance, and poverty through the lens of American culture when British culture already has a plethora of masterworks that focus on its own brand of bigotry, bias, and destitution. Why would British students need a global perspective as they enter the 21st-century workforce?

Consider how wonderful for Americans that Gove has eliminated the need to explain the real indictment of our judicial system through the fictional violation of Tom Robinson’s civil rights, despite the dramatic evidence provided by his defense attorney, Atticus Finch. How brilliant that British students will never have the opportunity to connect the fictional fraud in the trial of John Proctor to the real terror of the McCarthy hearings and Communist witch hunts of the 1950s. How fabulous that readers in the United Kingdom will not be forced to read how the myth of the American Dream is often unattainable, especially when the scientifically confirmed climate conditions associated with the Dust Bowl contributed to harsh economic realities.

So, thank you, Secretary Gove, for keeping America’s literary exposés on dirty secrets hidden. Now, a schoolchild’s positive image of an American 20th Century will not be tarnished by the likes of those upstarts Lee, Miller, and Steinbeck.

Yes, Mr. Secretary, for British students everywhere, ignorance will be bliss!


As the 10th grade English teacher, Linda had been responsible for preparing students for the rigors of the Connecticut Academic Performance Test, otherwise known as the CAPT. She had been preparing students with exam-released materials, and her collection of writing prompts stretched back to 1994. Now that she is retiring, it is time to clean out the classroom. English teachers are not necessarily hoarders, but there was evidence to suggest that Linda had stocked enough class sets of short stories to ensure students were always more than adequately prepared. Yet she was delighted to see these particular stories go.
“Let’s de-CAPT-itate,” we laughed and piled up the cartons containing well-worn copies of short stories.
Out went Rough Touch. Out went Machine Runner. Out went Farewell to Violet and A View from the Bridge.
I chuckled at the contents of the box labeled “depressing stories” before chucking them onto the pile.
Goodbye to Amanda and the Wounded Birds. Farewell to A Hundred Bucks of Happy. Adios to Catch the Moon. We pulled down another carton labeled “dog stories” containing Liberty, Viva New Jersey, and The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum. They too were discarded without a tear.
The chief flaw of the CAPT’s Response to Literature section was its ludicrous dilution of Louise Rosenblatt’s Reader Response Theory, in which students were asked to “make a connection”:

What does the story say about people in general?  In what ways does it remind you of people you have known or experiences you have had?  You may also write about stories or other books you have read, or movies, works of art, or television programs you have seen.

That question was difficult for many literal readers, who, responding to the most obvious plot point, might answer, “This story has a dog and I have a dog.” How else to explain all the dog stories? On other occasions, I found out that while taking standardized tests in the elementary grades, students had been told, “If you have no connection to the story, make one up!” Over the years, the CAPT turned our students into very creative liars rather than literary analysts.

 

The other flaw in the Response to Literature section was the evaluation question. Students were asked:

How successful was the author in creating a good piece of literature?  Use examples from the story to explain your thinking.

Many of our students found this a difficult question to negotiate, particularly if they thought the author had not written a good piece of literature but rather an average or mildly enjoyable story. They did manage to make their opinions known, and one of my favorite student responses began, “While this story is no Macbeth, there are a few nice metaphors…”

Most of the literature on the CAPT did come from reputable writers, but the selections were not of the quality of stories found in anthologies, such as Saki’s The Interlopers or Anton Chekhov’s The Bet. To be honest, I did not think the CAPT essays were an authentic activity, and I particularly did not like the selections on the CAPT’s Response to Literature section.

Now the CAPT will be replaced by the Smarter Balanced Assessments (SBAC), as Connecticut has selected SBAC as its assessment consortium to measure progress with the Common Core State Standards, and the test will move to 11th grade. This year (2014) is the pilot test only; there are no exemplars and no results. The SBAC is digital, and in the future we will practice taking this test on our devices, so there is no need to hang onto class sets of short stories. So why am I concerned that there will be no real difference with the SBAC? Cleaning the classroom may be a transition more symbolic of our move from paper to keyboard than of our gaining an authentic assessment.

Nevertheless, Linda’s classroom looked several tons lighter.

“We are finally de-CAPT-itated!” I announced looking at the stack of boxes ready for the dumpster.

“Just in time to be SBAC-kled!” Linda responded cheerfully.

Not so long ago, 11th grade was a great year of high school. The pre-adolescent fog had lifted, and the label of “sophomore,” literally “wise fool,” gave way to the less insulting “junior.” Academic challenges and social opportunities for 16- and 17-year-olds increased as students sought driver’s permits and licenses, employment, or internships in an area of interest. Students in this stage of late adolescence could express interest in their future plans, be it school or work.

Yet the downside to junior year had always been college entrance exams, and so junior year had typically been spent in preparation for the SAT or ACT. When to take these exams had always been up to the student, who paid a base price of $51 (SAT) or $36.50 (ACT) for the privilege of spending hours testing in a supervised room and weeks in anguish waiting for the results. Because a college accepts the best score, some students choose to take the test several times, as scores generally improve with repetition.

Beginning in 2015, however, juniors must prepare for another exam designed to measure their learning under the Common Core State Standards (CCSS). The two federally funded testing consortia, the Smarter Balanced Assessments (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have selected 11th grade to determine how college and career ready a student is in English/Language Arts and Math.

The result of this choice is that 11th grade students will take the traditional college entrance exam (SAT or ACT) on their own as an indicator of their college preparedness. In addition, they will take another state-mandated exam, either the SBAC or the PARCC, that also measures their college and career readiness. While the SAT or ACT is voluntary, the SBAC or PARCC will be administered during the school day, using 8.5 hours of instructional time.

Adding to this series of tests lined up for junior year are the Advanced Placement exams. Many 11th grade students opt to take Advanced Placement courses in a variety of disciplines, either to gain college credit for a course or to signal to college admissions officers an interest in college-level material. These exams are also administered during the school day in the first weeks of May, each taking four hours to complete.

One more possible test to add to this list is the Armed Services Vocational Aptitude Battery (ASVAB), which, according to the website Today’s Military, is given at more than half of all high schools nationwide to students in grades 10, 11, or 12, although 10th graders cannot use their scores for enlistment eligibility.

The end result is that junior year has gradually become the year of testing, especially from March through June, and all this testing is cutting into valuable instructional time. When students enter 11th grade, they have completed many prerequisites for more advanced academic classes, and they can tailor their academic program with electives, should electives be offered. For example, a student’s success with required courses in math and science can inform his or her choices in economics, accounting, pre-calculus, Algebra II, chemistry, physics, or anatomy and physiology. Junior year has traditionally been a student’s greatest opportunity to improve a GPA before making college applications, so time spent learning is valuable. In contrast, time spent in mandated testing robs each student of classroom instruction in content areas.

In taking academic time to schedule exams, schools can select their two concurrent exam weeks for performance and non-performance task testing. The twelve-week period (excluding blackout dates) from March through June is the current nationwide target for the SBAC exams, and schools that choose an “early window” (March-April) will lose instructional time before the Advanced Placement exams given in May. Mixed (grade 11 and 12) Advanced Placement classes will be impacted during scheduled SBACs as well, because teachers can only review past materials instead of progressing to new topics in a content area. Given these circumstances, what district would ever choose an early testing window? Most schools should opt for the “later window” (May) in order to allow 11th grade AP students to take the college credit exam before having to take (another) exam that determines their college and career readiness. Ironically, the barrage of tests that juniors must now complete to determine their “college and career readiness” is leaving them with less and less academic time to become college and career ready.

Perhaps the only fun remaining for 11th graders is the tradition of the junior prom. Except proms are usually held between late April and early June, when (you guessed it) there could be testing.

Opening speeches generally start with a “Welcome.”
Lucy Calkins started the 86th Saturday Reunion, March 22, 2014, at Teachers College with a conjunction.

“And this is the important thing,” she said, addressing the crowd that was filling up the rows in the Riverside Church, “the number of people who are attending has grown exponentially. This day is only possible with the goodwill of all.”

Grabbing the podium with both hands, and without waiting for the noise to die down, Calkins launched the day as if she were completing a thought left over from the last Saturday Reunion.

“We simply do not have the capacity to sign you up for workshops and check you in. We all have to be part of the solution.”

She was referring to the workshops offered free of charge to educators by all Teachers College Reading and Writing Project (TCRWP) staff developers at Columbia University. This particular Saturday, there were over 125 workshops advertised on topics such as “argument writing, embedding historical fiction in nonfiction text sets, opinion writing for very young writers, managing workshop instruction, aligning instruction to the CCSS, using performance assessments and curriculum maps to ratchet up the level of teaching, state-of-the-art test prep, phonics, and guided reading.”

“First of all,” she chided, “we cannot risk someone getting hit by a car.” Calkins’s concerns are an indication that the Saturday Reunion workshop program is a victim of its own success. The thousands of teachers disembarking from buses, cars, and taxis were directed by TCRWP minions to walk on sidewalks, wait at crosswalks, and “follow the balloons” to the Horace Mann building or Zankel Hall.

“Cross carefully,” she scolded in her teacher voice, “and be careful going into the sessions. The entrances to the larger workshops are the center doors; the exits are to the sides. We can’t have 800 people going in and out the same way.”

Safety talk over, Calkins turned her considerable energy to introducing a new collaborative venture, a website where educators can record their firsthand experiences with the Common Core State Standards and with Smarter Balanced Assessments (SBAC) or Partnership for Assessment of Readiness for College and Careers (PARCC) testing.

And, as unbelievable as this sounds, Calkins admitted that, sometimes, “I get afraid to talk out.”
That is why, she explained, she has joined an all-star cast of educators (including Diane Ravitch, Kylene Beers, Grant Wiggins, Robert Marzano, Anthony Cody, Kathy Collins, Jay McTighe, David Pearson, Harvey “Smokey” Daniels, and others; see below) in organizing a website where educators with firsthand experience of standardized testing can document what they see. The site is called Testing Talk (http://testingtalk.org/). The site’s message on the home page states:

This site provides a space for you to share your observations of the new breed of standardized tests. What works? What doesn’t? Whether your district is piloting PARCC, Smarter Balanced, or its own test, we want to pass the microphone to you, the people closest to the students being tested. The world needs to hear your stories, insights, and suggestions. Our goal is collective accountability and responsiveness through a national, online conversation.

Calkins’s promotion was directed to educators: “This will be a site for you to record your experience with testing, not to rant.” She noted that as schools “are spending billions,” all feedback on testing should be open and transparent.

Winding down, Calkins looked up from her notes. “You will all be engaged,” she promised. “Enter comments; sign your name,” she urged before closing with a final admonishment: “Be brave.”


I believe the author Stephen King would hate the language of the Common Core State Standards for one reason: unnecessary adverbs. His book On Writing has a section devoted to explaining why The adverb is not your friend.

Adverbs … are words that modify verbs, adjectives, or other adverbs. They’re the ones that usually end in -ly. Adverbs, like the passive voice, seem to have been created with the timid writer in mind. … With adverbs, the writer usually tells us he or she is afraid he/she isn’t expressing himself/herself clearly, that he or she is not getting the point or the picture across.

I have written about King and adverbs before. As I implement the standards in my high school English curriculum, I find myself agreeing with him. Take, for example, Common Core Anchor Reading Standard 1. The standard states:

Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text. CCSS.ELA-LITERACY.CCRA.R.1

The use of adverbs in this standard has led to more confusion, not less. The expression “read closely” was recoined as “close reading,” and that has resulted in parodies of teachers holding books up to their faces, mocking the standard. Why the writers of the Common Core felt the need to modify the action verb “read” at all is perplexing. Students must read to determine what a text says. That is all. The admonishment to “read closely” to determine what the “text says explicitly” implies that the author is either trying to slip an idea past a reader or has been ineffective in communicating the idea. I am not convinced any author would appreciate this standard.

Moreover, the Common Core Anchor Writing Standards have the same problem. For example:

Write informative/explanatory texts to examine and convey complex ideas and information clearly and accurately through the effective selection, organization, and analysis of content. CCSS.ELA-LITERACY.CCRA.W.2

I believe that every teacher requires students to convey “complex ideas and information clearly and accurately,” yet the language of this standard implies that students would otherwise be allowed to write distorted or inaccurate responses. The standard should read, “Write informative/explanatory texts to examine and convey complex ideas and information through the effective selection, organization, and analysis of content.” The adverbs are redundant, as King demonstrates in On Writing (bolded words his choice):

Consider the sentence He closed the door firmly. It’s by no means a terrible sentence (at least it’s got an active verb going for it), but ask yourself if firmly really has to be there. You can argue that it expresses a degree of difference between He closed the door and He slammed the door, and you’ll get no argument from me … but what about context? What about all the enlightening (not to say emotionally moving) prose which came before He closed the door firmly? Shouldn’t this tell us how he closed the door? And if the foregoing prose does tell us, isn’t firmly an extra word? Isn’t it redundant?

The same editing should be applied to the Speaking and Listening Anchor Standards:

Prepare for and participate effectively in a range of conversations and collaborations with diverse partners, building on others’ ideas and expressing their own clearly and persuasively. CCSS.ELA-LITERACY.CCRA.SL.1

In this standard, the subjective nature of the adverb “effectively” creates the same confusion as reading “closely.” This standard could be made measurable if the emphasis were on the infinitive “to persuade” rather than on the timid adverbs “effectively” and “persuasively.” How does one measure these terms, unless by degrees? An argument is either effective or not. Readers are persuaded or not. A standard is unequivocal. The present wording could lead to much equivocating if a reader has to determine the degree of “effectively” or “persuasively.” Try this rewrite: “Prepare for and participate in a range of conversations and collaborations with diverse partners, building on others’ ideas in order to persuade.”

In addition, the Language (or grammar standards) themselves contain a distracting adverbial phrase:

Apply knowledge of language to understand how language functions in different contexts, to make effective choices for meaning or style, and to comprehend more fully when reading or listening. CCSS.ELA-LITERACY.CCRA.L.3

The phrase comprehend “more fully” sounds like a phrase from one of my students’ essays. I would equate the construct of “more fully” with “as a whole” or “the fact that” or the ubiquitous word “flows” found in my weaker writers’ responses. These are all phrases that receive a large NO! in red ink from me as I grade or confer. A reader comprehends or a reader does not.

King argues that writers must be deliberate in stemming adverbs in this selection from On Writing:

Someone out there is now accusing me of being tiresome and anal-retentive. I deny it. I believe the road to hell is paved with adverbs, and I will shout it from the rooftops. To put it another way, they’re like dandelions. If you have one on your lawn, it looks pretty and unique. If you fail to root it out, however, you find five the next day . . . fifty the day after that . . . and then, my brothers and sisters, your lawn is totally, completely, and profligately covered with dandelions. By then you see them for the weeds they really are, but by then it’s — GASP!! — too late.

King is proved correct about the propagation of adverbs in the language of the Common Core. Adverbs pop up in the NOTES ON sections that follow the anchor standards. For example:

Notes on Range and Content of Student Reading

Students can only gain this foundation when the curriculum is intentionally and coherently structured to develop rich content knowledge within and across grades. Students also acquire the habits of reading independently and closely, which are essential to their future success.

Notes on Range and Content of Student Speaking and Listening

Digital texts confront students with the potential for continually updated content and dynamically changing combinations of words, graphics, images, hyperlinks, and embedded video and audio.

When a reader removes the adverbs (“intentionally,” “coherently,” “closely,” “continually,” and “dynamically”), the pedantic tone disappears. The implication that curriculum is otherwise “unintentional” or “unstructured” is removed. The confusion as to what reading “closely” means is removed. Don’t even get me started as to why “dynamically” is there, although I suspect the use is meant to cover some form of cool media that does not yet exist, so the CCSS writers modified “changing” with “dynamically” to account for future media constructs. The only adverb in this section that needs to be included is “independently,” and even that should be an adjective. We all want independent readers, so be clear and say “independent readers.”

Stephen King has had an impact on my writing, and when I come to include an adverb, I pause to consider whether it is necessary. Would that the writers of the Common Core felt the same. The standards are riddled with adverbs. How did I find most of them? I used the “Find” option (Command-F on my Mac) and put “ly” in the search box. How did I know that most adverbs end in -ly? Here, for your enjoyment, is my favorite adverb resource, a video from Schoolhouse Rock with the charming song about adverbs that remains emblazoned on my brain:
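For readers who would rather not squint at a Find box, the same “search for ly” trick can be sketched in a few lines of Python. This is a hypothetical helper, not part of the original post, and like the Command-F method it is a blunt instrument: it will also flag non-adverbs such as “only” or “family.”

```python
import re

def find_ly_words(text):
    """Return every word ending in -ly, the same rough net as
    searching a document for 'ly'. Not all hits are adverbs."""
    return re.findall(r"\b\w+ly\b", text, flags=re.IGNORECASE)

standard = ("Read closely to determine what the text says explicitly "
            "and to make logical inferences from it.")
print(find_ly_words(standard))  # ['closely', 'explicitly']
```

Running it against Anchor Reading Standard 1 surfaces the same two adverbs discussed above, which is exactly the point: the pattern finds candidates, and the writer still has to judge, as King would, whether each one earns its place.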

Consider how the advice from King and the lesson from this video can be used by teachers to stop the flood of adverbs and to apply the Speaking and Listening standard in a classroom where “digital texts confront students with the potential for updated content and changing combinations of words.” Note: “continually” and “dynamically” not included.