
 

This April 1865 photo provided by the Library of Congress shows President Abraham Lincoln’s box at Ford’s Theater, the site of his assassination. Under the headline “Great National Calamity!” the AP reported Lincoln’s assassination, on April 15, 1865. (AP Photo/Library of Congress)

News stories are generally written in what is commonly known as the inverted pyramid style, in which the opening paragraph features the “5 Ws” of journalism, plus an H: Who, What, Where, When, Why, and How. The reason for this style is that the reader gets the most important information up front. Given how little time readers have for the amount of news generated in a 24-hour news cycle, the inverted pyramid makes sense.

In contrast, 150 years ago a dispatch by the Associated Press took a storytelling approach when President Abraham Lincoln’s assassination at the hands of John Wilkes Booth was relayed by AP correspondent Lawrence Gobright. Under the headline “Great National Calamity!” he chose to deliver the monumental news of Lincoln’s death gently, in paragraph 9:

The surgeons exhausted every effort of medical skill, but all hope was gone.

The Common Core State Standards in Literacy promote primary source documents, such as this news release, in English Language Arts and Social Studies. Documents like this provide students an opportunity to consider the voice or point of view of a writer within a historical context.

In this 19th-century AP news release, an attached editor’s note described in vivid detail Gobright’s efforts to gain first-hand information in compiling the story of Lincoln’s assassination. In the tumult that followed the assassination, Gobright became more than a witness as he:

scrambled to report from the White House, the streets of the stricken capital, and even from the blood-stained box at Ford’s Theatre, where, in his memoir he reports he was handed the assassin’s gun and turned it over to authorities.

This circa 1865-1880 photograph provided by the Library of Congress' Brady-Handy Collection shows Lawrence A. Gobright, the Associated Press' first Washington correspondent. A native of Hanover, Pa., Gobright covered both inaugurations of Abraham Lincoln, the Civil War and Lincoln's assassination during a career spanning more than a third of a century in Washington. Under the headline "Great National Calamity!" the AP reported President Abraham Lincoln’s assassination, on April 15, 1865. (AP Photo/Library of Congress)


Gobright’s opening line for the news story identified the setting as Ford’s Theatre; he then added information of considerable interest to the Union Army, that:

It was announced in the papers that Gen. Grant would also be present, but that gentleman took the late train of cars for New Jersey.

After setting up who was or was not in attendance, Gobright detailed the sequence of events in paragraph 3:

During the third act and while there was a temporary pause for one of the actors to enter, a sharp report of a pistol was heard, which merely attracted attention, but suggested nothing serious until a man rushed to the front of the President’s box, waving a long dagger in his right hand, exclaiming, ‘Sic semper tyrannis,’

Describing the assailant’s escape on horseback, Gobright summarized the crowd’s reaction in paragraph 4 with an understatement: “The excitement was of the wildest possible description…”

The AP’s edited version online notes that the report does not contain details of the second attack that night, on Secretary of State William Seward. There is, however, a reference to the other members of Lincoln’s cabinet who, after hearing about the attack on Lincoln, travelled to the deathbed:

They then proceeded to the house where the President was lying, exhibiting, of course, intense anxiety and solicitude.

As part of a 150-year memorial tribute, the AP offers two websites with Gobright’s report: the first with an edited version of the report and the second an interactive site with graphics. The readability score on Gobright’s release is grade 10.3, but with some frontloading of vocabulary (solicitude, syncope) this story can be read by students in middle school. There are passages that place the student in the moment, such as:

  • There was a rush towards the President’s box, when cries were heard — ‘Stand back and give him air!’ ‘Has anyone stimulants?’
  • On an examination of the private box, blood was discovered on the back of the cushioned rocking chair on which the President had been sitting; also on the partition and on the floor.
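For teachers curious how a score like that grade 10.3 is computed: the AP does not say which readability tool it used, but the Flesch-Kincaid grade level, one common formula, can be sketched in a few lines of Python. The syllable counter below is a crude heuristic, so results will differ slightly from commercial tools:

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (including y);
    # every word counts as at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Gobright's single-sentence excerpt quoted earlier lands in the same
# range as the grade reported for the full release.
grade = fk_grade("The surgeons exhausted every effort of medical skill, "
                 "but all hope was gone.")
```

A frontloaded vocabulary lesson changes what students can comprehend, but not this score; the formula only sees sentence length and syllable counts.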

The NYTimes reporting of the assassination, which had the advantage of a several-hour head start, did not bury the lede, or begin with details of secondary importance; it offered the critical information through a series of headlines beginning with the kicker “An Awful Event”:

An Awful Event
The Deed Done at Ford’s Theatre Last Night.
THE ACT OF A DESPERATE REBEL
The President Still Alive at Last Accounts.
No Hopes Entertained of His Recovery.
Attempted Assassination of Secretary Seward.
DETAILS OF THE DREADFUL TRAGEDY.

Their six-column spread allowed space for the six drop heads, or smaller secondary headlines, stacked above the article to provide an outline of the events. The article that follows begins with then Secretary of War Edwin Stanton’s message to Major General Dix, sent April 15, 1865, at 1:30 AM:

This evening about 9:30 PM, at Ford’s Theatre, the President while sitting in his private box, with Mrs. Lincoln, Mrs. Harris, and Major Rathburn, was shot by an assassin who suddenly entered the box and approached behind the President.

Stanton’s 324-word report has a readability score of grade 7.2 and also includes details about the other assassination attempt, on Seward’s life:

About the same hour an assassin, whether the same or not, entered Mr. SEWARD’s apartments, and under the pretence of having a prescription, was shown to the Secretary’s sick chamber. The assassin immediately rushed to the bed, and inflicted two or three stabs on the throat and two on the face.

A second dispatch features Gobright’s reporting and appears below Stanton’s message in the second column. Following these accounts, a third dispatch by an unnamed reporter is dated Friday, April 14, 11:15 P.M. and, like Gobright’s account, begins with a storybook-type lead:

A stroke from Heaven laying the whole of the city in instant ruins could not have startled us as did the word that broke from Ford’s Theatre a half hour ago that the President had been shot. It flew everywhere in five minutes, and set five thousand people in swift and excited motion on the instant.

These first-person accounts by Gobright, Stanton, and others covering Lincoln’s assassination will allow students to contrast the reporting styles they recognize today with an example of the storytelling reporting style of 150 years ago. Students can analyze how each style conveys information, and then comment on the impact each style may have on an audience.

More important is the opportunity to ditch the dry facts from a textbook, as these newspaper releases allow students to discover that at the heart of stories about Lincoln’s assassination, the reporters were really storytellers, and their hearts were breaking.

When Erik Larson was interviewed by the NY Times for his latest book Dead Wake, about the sinking of the R.M.S. Lusitania, he expressed his purpose for choosing to write in the narrative non-fiction genre:

“It is not necessarily my goal to inform. It is my goal to create a historical experience with my books. My dream, my ideal, is that someone picks up a book of mine, starts reading it, and just lets themselves sink into the past and then read the thing straight through, and emerge at the end feeling as though they’ve lived in another world entirely.”

There is nothing of analysis in his stated purpose for writing, but there is a desire to have a reader engulfed by a narrative that ends in the reader “feeling.”

In contrast, in the first three anchor standards for reading (grades K-12), the Common Core State Standards (CCSS) for English Language Arts spell out the distance between their objectives and Larson’s desire to use narrative non-fiction to connect viscerally with readers:

CCSS.ELA-Literacy.CCRA.R.1
Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text.
CCSS.ELA-Literacy.CCRA.R.2
Determine central ideas or themes of a text and analyze their development; summarize the key supporting details and ideas.
CCSS.ELA-Literacy.CCRA.R.3
Analyze how and why individuals, events, or ideas develop and interact over the course of a text.

The anchor and grade-level standards were written purposely to be devoid of any reference to a reader’s feeling or connection. These standards were carefully articulated so as not to be confused with the popular Reader Response Theory, supported by Louise Rosenblatt, which focused “on the reader rather than the author or the content and form of the work.”

“Reading closely” in the CCSS has been spun as “close reading,” defined by the Partnership for Assessment of Readiness for College and Careers (PARCC) as:

Close, analytic reading stresses engaging with a text of sufficient complexity directly and examining meaning thoroughly and methodically, encouraging students to read and reread deliberately. Directing student attention on the text itself empowers students to understand the central ideas and key supporting details. It also enables students to reflect on the meanings of individual words and sentences; the order in which sentences unfold; and the development of ideas over the course of the text, which ultimately leads students to arrive at an understanding of the text as a whole. (2011, p. 7)

Running the definition of close reading (above) through a word sift analysis highlights the CCSS emphasis on ideas and meaning for the student.

Missing from this definition? The word “author.”

This word sift analysis illustrates how the “close reading” advocated by the CCSS requires students to read for meaning, with no consideration to the intent of an author.
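For those who want to replicate the exercise, a rough equivalent of the word sift can be sketched with Python’s Counter over the PARCC definition quoted above. The stopword list here is my own ad hoc choice, not Word Sift’s:

```python
from collections import Counter
import re

parcc_definition = (
    "Close, analytic reading stresses engaging with a text of sufficient "
    "complexity directly and examining meaning thoroughly and methodically, "
    "encouraging students to read and reread deliberately. Directing student "
    "attention on the text itself empowers students to understand the central "
    "ideas and key supporting details. It also enables students to reflect on "
    "the meanings of individual words and sentences; the order in which "
    "sentences unfold; and the development of ideas over the course of the "
    "text, which ultimately leads students to arrive at an understanding of "
    "the text as a whole."
)

# Ad hoc stopword list so function words don't drown out the content words.
stopwords = {"a", "an", "and", "as", "at", "in", "it", "of", "on", "the",
             "to", "which", "with", "also"}
counts = Counter(w for w in re.findall(r"[a-z]+", parcc_definition.lower())
                 if w not in stopwords)

# "students" and "text" dominate the counts; "author" never appears.
```

Even this crude count makes the point: the definition is dense with “students” and “text,” and empty of “author.”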

The NYTimes interview with Larson provided him the opportunity to state that he does not write to a standard; he says nothing about “meaning” and “ideas.” Instead, Larson poetically defined his goal for writing. He writes for the reader to have an experience, and that experience is his “dream” or “ideal.”

While the language of the Common Core contrives to eliminate the author’s role in creating texts, the very texts students will be expected to “close read,” Erik Larson reminds us that authors do not write to meet a standard.

Authors write to create feelings in their readers, whether those readers are reading closely or not.

There are advertising campaigns that successfully employ the technique of “advertised ignorance,” or false authority, in which an individual proudly declares that he or she is not an expert just before rendering an expert opinion. An example of this form of advertising was a series of promotions for Vicks Formula 44 cough syrup starring actors who portrayed doctors on popular soap operas. Here is the 1986 TV commercial starring Peter Bergman:

This commercial was the second in a series of successful TV doctor endorsements for over-the-counter medicines; people responded well to taking medical advice from a celebrity who admitted he was not an expert.

The broad acceptance of this logical fallacy may explain why the creators of the Common Core State Standards (CCSS) were successful promoters. With minimal experience as educators or certifications in K-12 education, a handful of individuals convinced the National Governors Association that a set of national achievement standards was necessary to improve education.

These “Architects of the Common Core,” David Coleman and Jason Zimba, founded The Grow Network, an internet-based consulting organization, before joining with Sue Pimentel to found Student Achievement Partners (SAP), a non-profit organization that researched and developed “achievement based” assessment standards. These three were not experts in education through research or practice, but like the doctor who plays an expert on TV, they confidently endorsed the Common Core as the cure for all of the nation’s education ills.

The exorbitant cost of their diagnosis and cure was the topic of an article by Pullman that ran in The Federalist (January 2015), titled Ten Common Core Promoters Laughing All the Way to the Bank. The tagline:

People intimately involved with creating or pushing Common Core are making a lot of money despite having demonstrated exactly zero proven success at increasing student achievement.

In addition to Coleman, Zimba, and Pimentel, the article lists others who have endorsed the Common Core State Standards (CCSS) for profit: former New York City Schools Chancellor Joel Klein; former New York Education Commissioner John King; Joanne Weiss, chief of staff to Education Secretary Arne Duncan; Idaho State Superintendent Tom Luna; former Education Secretary Bill Bennett; and Dane Linn, vice president for the Business Roundtable. The lone educator, William McCallum, head of the University of Arizona’s math department, has begun a nonprofit curriculum company, Illustrative Mathematics, to generate materials for the Common Core.

In her article, Pullman lists the credentials for each of the ten promoters and details how much they have financially gained, or still stand to gain, for supporting the Common Core. What these ten individuals collectively lack in education experience, they make up in business acumen. Like the handsome pretend doctor in the Vicks 44 commercials, who was paid handsomely for his marketing, these quasi-educators endorsing the Common Core will reap profits whether the CCSS initiative is successful or not.

Of course, the irony of this form of endorsement is that one of the key shifts in the English Language Arts standards is that students should place an emphasis on evidence whenever they make a claim:

The Common Core emphasizes using evidence from texts to present careful analyses, well-defended claims, and clear information.

If this key shift in the CCSS had been applied when the standards were in their genesis, there might have been a demand for evidence behind the claims of these CCSS promoters. However, once the standards were announced in 2009, 44 states rapidly moved to adopt the CCSS. Many of these states were spurred on by the Race to the Top federal funding deadlines, which awarded extra points to applications completed by August 2010.

The nationwide rush to adopt the standards was spurred on by non-educators and policy wonks who represented businesses that stood to profit as state after state swallowed what has turned out to be costly, even bitter, medicine.

Whether that CCSS medicine will be effective is yet to be determined, but twelve states that had initially signed on have filed to opt out… a decision not to follow the “doctor’s” orders.

Testing a Thousand Madelyns

February 25, 2015

My niece is a beautiful little girl. She is a beautiful girl on the outside, the kind of little girl who cannot take a bad picture. She is also beautiful on the inside. She is her mother’s helper, fiercely loyal to her older brothers, and a wonderful example for her younger brother and sisters. She is the gracious hostess who makes sure you get the nicest decorated cupcake at the birthday party. She has an infectious laugh, a compassionate heart, and an amazing ability “to accessorize” her outfits. For the sake of her privacy, let’s call her Madelyn.

Two years ago, the teachers at her school, like teachers in thousands of elementary schools across the United States, prepared Madelyn and her siblings for the mandated state tests. There were regular notices sent home throughout the school year that discussed the importance of these tests. There was a “pep-test-rally” a week before the test where students made paper dolls which they decorated with their names. A great deal of time was spent getting students enthused about taking the tests.

Several months later, Madelyn received her score on her 4th grade state test. She was handed her paper doll cut-out with her score laminated in big numbers across the doll she had made.

Madelyn was devastated.

She hated her score because she understood that it was too low. She hid the paper doll throughout the day, and when she came home, she cried. She could not hang the paper doll on the refrigerator where her brothers’ and sisters’ scores hung. The scores on their paper dolls were higher.

She cried to her mother, and her mother also cried. Her mother remembered that same hurt when she had not done well on tests in school either. As they sobbed together, Madelyn told her mother, “I’m not smart.”

Now, the annual testing season is starting again. This year, there will be other students like Madelyn who will experience the hype of preparation, undergo weeks of struggling with tests, and then endure a form of humiliation when the results return. Administrators and teachers, pressured to increase proficiency results on a state test, often forget the damage done to the students who do not achieve a high standard.

That paper doll created during the fervor of test preparation is an example of an unintended consequence; no one in charge considered how easily scores could be compared once they were available to students in so public a manner. Likewise, many stakeholders are unaware that the rallies, ice-cream parties, and award ceremonies do little to comfort those students who, for one reason or another, do not test well.

There is little consolation to offer 10-year-old students who see the results of state tests as the determiner of being “smart,” because 10-year-olds believe tests are a final authority. They do not grasp the principles of test design that award total success to a few at the high end and assign failure to a few at the low end, a design best represented by the bell curve, “the graphic representation showing the relative performance of individuals as measured against each other.” They do not understand that their 4th grade test scores are not indicators of later success.
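To make the bell-curve point concrete, here is a small sketch using assumed, purely illustrative score parameters (mean 500, standard deviation 100, not any particular state test’s scale):

```python
from statistics import NormalDist

# Hypothetical scaled-score distribution: mean 500, standard deviation 100.
scores = NormalDist(mu=500, sigma=100)

# By construction, roughly 68% of test takers fall within one standard
# deviation of the mean...
middle_share = scores.cdf(600) - scores.cdf(400)

# ...while only about 2% score two standard deviations above it.
top_share = 1 - scores.cdf(700)
```

Whatever a 10-year-old believes, a score’s position on this curve is a property of the test’s design: the distribution itself guarantees that only a few students can land at the top.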

Despite all the advances in computer-adaptive testing using algorithms of one sort or another, today’s standardized tests are limited to evaluating a specific skill set; true performance-based tests have not yet been developed because they are too costly and too difficult to standardize.

My niece Madelyn would excel in a true performance-based task at any grade level, especially if the task involved her talents of collaboration, cooperation, and presentation. She would be recognized for the skill sets that are highly prized in today’s society: her work ethic, her creativity, her ability to communicate effectively, and her sense of empathy for others. If there were assessments and tests that addressed these particular talents, her paper doll would not bear the Scarlet Letter-like branding of a number she was ashamed to show to those who love her.

Furthermore, there are students who, unlike my niece Madelyn, do not have support from home. How these students cope with a disappointing score on a standardized test without support is unimaginable. Madelyn is fortunate to have a mother and father, along with a network of people, who see all her qualities in total; she is prized for more than test grades.

At the conclusion of that difficult school year, in a moment of unexpected honesty, Madelyn’s teacher pulled my sister aside.
“I wanted to speak to you, because I didn’t want you to be upset about the test scores,” he admitted to her. He continued, “I want you to know that if I could choose a student to be in my classes, I would take Madelyn…I would take a thousand Madelyns.”

It’s testing season again for a thousand Madelyns.
Each one should not be defined by a test score.

Teachers are looking to include informational text in their English Language Arts classrooms, but what about informational space?

The hard copy of the NYTimes Sports section on Saturday, July 12, 2014, offered an opportunity to teach how space can be information.


My photo; photo also featured in Deadspin blog

The photo above shows the front page of Sports Saturday. Students can note the banner is in the same location, floating at the top of the page with teaser photos for the content inside. Under the banner and centered on the page is a feature that is usually on the inside of the sports section: a column of player trades and transactions in the different sports leagues for the day. The column is actual size, straddling the paper’s fold and surrounded by white space. Below the fold, one transaction in the column is highlighted in bright yellow. The rest of the page is blank.

 

Why the single highlighted line? What was the reason for all the white space? 

The Cleveland Cavaliers signed LeBron James.
Yes, during the same week when the semi-finals and finals of the 2014 World Cup riveted millions, the only news that mattered to sports fans was a short declarative sentence: “Cleveland Cavaliers signed F LeBron James.”

That was the purpose of the white space… to provide emphasis.

The other transactions listed from Major League Baseball, National Basketball Association, and the National Hockey League, however significant in the future, were not as significant at this moment.

That was the purpose of the yellow highlighted line: “Cleveland Cavaliers signed F LeBron James.”

In determining an author’s purpose, which in this case was the layout editor’s purpose, the Common Core State Standards (CCSS) offer a methodology for having students review the craft and structure of a text. Teachers use these standards to frame questions about the text:

English Language Arts Craft and Structure Anchor Standards

CCSS.ELA-LITERACY.CCRA.R.4
Interpret words and phrases as they are used in a text, including determining technical, connotative, and figurative meanings, and analyze how specific word choices shape meaning or tone.
CCSS.ELA-LITERACY.CCRA.R.5
Analyze the structure of texts, including how specific sentences, paragraphs, and larger portions of the text (e.g., a section, chapter, scene, or stanza) relate to each other and the whole.
CCSS.ELA-LITERACY.CCRA.R.6
Assess how point of view or purpose shapes the content and style of a text.

The front page of this Sports Saturday provides multiple opportunities to discuss the difference between denotation (what is on the page) and connotation (what is implied). In helping students to consider the craft and structure of this particular layout, a teacher could pose questions based on Webb’s Depth of Knowledge (DOK), such as:

• How would you summarize what you read in the written text? (denotation)
• How would you summarize what you see in the white space in contrast to the written text? (denotation)
• What do you notice about where the highlighted information is placed? (denotation)
• What conclusions can you draw about the layout editor’s choice to highlight only one player transaction? (connotation)
• What is your interpretation of the use of the white space? (connotation)
• Can you formulate a theory for the layout? (connotation)
• Can you elaborate on a reason the editor used the small font in the player transaction column for this news? (connotation)

Of course, the story of the LeBron signing was also inside the Saturday Sports section. Michael Powell wrote the feature article Star Reconnects With a Special Place in His Heart, in which the news of LeBron’s return was celebrated:

“The man knows his region, and his audience, and his life. Even as the news broke on television, you could hear out your window Cleveland residents loosening more or less random whoops. Car horns beeped. Strangers exchanged bro-hugs and palm slaps” (Powell-NYTimes)

Students could read Powell’s article to extend their thinking about the impact of this one player’s return to a team he left several years ago. Then, there is LeBron’s own essay, co-authored by Lee Jenkins, in Sports Illustrated. In this essay, LeBron explains the reasons for his return:

“But this is not about the roster or the organization. I feel my calling here goes above basketball. I have a responsibility to lead, in more ways than one, and I take that very seriously. My presence can make a difference in Miami, but I think it can mean more where I’m from. I want kids in Northeast Ohio, like the hundreds of Akron third-graders I sponsor through my foundation, to realize that there’s no better place to grow up. Maybe some of them will come home after college and start a family or open a business. That would make me smile. Our community, which has struggled so much, needs all the talent it can get”  (LeBron/Jenkins Sports Illustrated).

In this essay, LeBron anticipates (and connotes) the level of commitment that will be necessary for continued success:

“In Northeast Ohio, nothing is given. Everything is earned. You work for what you have” (LeBron/Jenkins Sports Illustrated).

These other two informational texts could also provide opportunities to have students practice denotation and connotation:

  • How would you summarize what you read in these written texts? (denotation)
  • What conclusion can be drawn after reading these three texts? (connotation)
  • What is your interpretation after reading these texts? Support your rationale. (denotation/connotation)

A final exercise? Have students research the cost of a full-page spread in the NYTimes ($70,000 for non-profits; up to $200,000 for-profit). Have students discuss or make arguments about the use of white space in this layout once they know the expense of the layout editor’s choice.

The best part of these exercises is that the reader does not need to know basketball to appreciate how this information is communicated: through layout, through a feature story, and through a personal essay. I do not follow basketball, and I am only peripherally aware of LeBron’s role in the NBA. I was intrigued, however, by the use of white space to convey information. I also considered the different sizes of spaces related to the text. An NBA basketball court is 94′ by 50′, or 4,700 square feet. By another measurement, LeBron has a rumored vertical leap of about 40 inches (the average NBA player can jump 28 inches). Finally, the NYTimes page is 24″ x 36″, or 864 square inches.

In each case, size matters. In this context, space matters as well.
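For readers who want to check that arithmetic, the measurements above reduce to a few lines:

```python
court_sqft = 94 * 50   # NBA court: 94 ft x 50 ft = 4,700 sq ft
page_sqin = 24 * 36    # NYTimes page: 24 in x 36 in = 864 sq in

# Converting the court to square inches (144 sq in per sq ft) shows
# roughly how many broadsheet pages it would take to tile a court.
court_sqin = court_sqft * 144
pages_per_court = court_sqin / page_sqin
```

The comparison lands near 800 pages per court, which is one more way to see how much room the layout editor gave to a single highlighted line.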

Throwbacks in education are common.

This time, Robert Pondiscio, a Senior Fellow and Vice President for External Affairs at the Thomas B. Fordham Institute, is itching for a fight to reopen old “reading war” wounds. He has taken umbrage at the NYTimes (7/2/14) opinion piece Balanced Literacy Is One Effective Approach by Lucy Calkins, director of the Teachers College Reading and Writing Project at Columbia University and a proponent of balanced literacy.

Pondiscio’s op-ed (7/3/2014), titled Why Johnny Won’t Learn to Read, charges back into the heat of that fight as he references the 1997 National Reading Panel’s review of studies on the teaching of reading.

In reminding everyone that “phonics won,” Pondiscio jettisons the definition of the word “balanced” in the phrase balanced literacy. The Oxford Online Dictionary states that when “balanced” is used as an adjective, it is defined as:

  • Keeping or showing a balance; in good proportions
  • Taking everything into account; fairly judged or presented
  • Having different elements in the correct proportion

Since 1997, the term “balanced literacy” has come to mean that the phonics approach should be kept in good proportion with other approaches for teaching reading and writing. Pondiscio, however, recasts the phrase “balanced literacy” in mythological terms, as a hydra…“a new head for whole language.” His interpretation is unsupported by the definition.

Pondiscio’s wish that the “win” by phonics would eradicate whole language’s contributions to teaching literacy is overstated, as some of the recommendations by the NRP could be associated with whole language:

  • Teaching vocabulary words—teaching new words, either as they appear in text, or by introducing new words separately. This type of instruction also aids reading ability.
  • Reading comprehension strategies—techniques for helping individuals to understand what they read. Such techniques involve having students summarize what they’ve read, to gain a better understanding of the material.

Beyond his use of the NRP’s 17-year-old study, there is another problem in his choice of evidence: a quote by Susan Pimentel, one of the “principal authors of the Common Core.” Pimentel lacks the academic credentials (BS Early Childhood; law degree) to qualify her as an expert in literacy, yet she claims that balanced literacy is “worrisome and runs counter to the letter and spirit of Common Core.” In contrast, many early literacy educators find the ELA CCSS worrisome, running counter to the spirit of new and emerging readers.

Moreover, Pimentel’s on-again/off-again association with the other CCSS “architects” (David Coleman and Jason Zimba) from Student Achievement Partners (SAP) was laid bare by Mercedes Schneider in a February 27, 2014, post: Schneider Dissects Sue Pimentel’s Role in Common Core Drafting; Exposes How 3 People Were Main CCSS Architects. Schneider documents Pimentel’s role through SAP’s tax filings and marginalizes her contributions with a suggestion that her inclusion on the CCSS was gender-based: “a female speaking to an audience from a profession that is primarily female, and that is good public relations for selling the CCSS product.”

Further on in Pondiscio’s op-ed, there is a reference to a NY Department of Education study (2008-2012) of the Core Knowledge program, which found “significantly stronger gains than comparison school students on nearly all measures” for 1,000 students in grades K-2 in 20 schools. The use of this study is no surprise; Pondiscio’s promotion of the Core Knowledge program is due to the leadership of E.D. Hirsch, Jr., a Fordham Medal of Valor winner. What is missing is information on the size of the study, which involved less than 1% of the K-2 student population (1.1 million total students enrolled in 2013), and on its methodology in comparison to other literacy programs. Hirsch himself concurs: “The study was too small. We need a bigger one – and one that gauges long-term as well as short-term effects.”

But what is Pondiscio’s most damning complaint against balanced literacy?

 “While the Common Core focuses kids’ attention on what the text says, balanced literacy often elicits a personal response to literature.” (Pondiscio)

Let me repeat his concern.

Pondiscio is distressed that a student may respond emotionally to a work of literature.

How is this a problem?

I am quite certain that a personal response in a reader is exactly what any author of literature hopes to achieve.

Reading literature is more than a decoding exercise. Reading literature at any age, especially good, complex literature, is an exercise that connects the reader and the author in an intimate bond of empathy.

Balanced literacy does require that a student use evidence from a text, but its advantage is that it recognizes that students cannot be silenced on what they think or feel about their reading, whether the choice of texts is theirs or not.

Pondiscio’s issue with whole language is that it emphasized reading for meaning instead of spelling, grammar, and sounding words out. In making this final part of his argument, Pondiscio reduces words to data or things devoid of meaning.

Such thinking reminds me of a line from Al Pacino’s Looking for Richard, a film study on William Shakespeare’s Richard III.

While filming on the streets of NYC, Pacino is seen asking passers-by about their relationship to Shakespeare. One panhandler stops long enough to explain how he feels the words in Shakespeare “instruct us”:

If we think words are things and have no feelings in words…then we say things to each other that mean nothing.

But if we felt what we said, we’d say less and mean more.

The panhandler shuffles off after offering his personal explanation of words and meaning.

Pondiscio claims he wants “students to grapple with challenging texts that are worth reading,” but grappling with what the panhandler says about the meaning of words in those texts, challenging or not, is even more important.

Since I write to understand what I think, I have decided to focus this particular post on the different categories of assessments. My thinking has been motivated by helping teachers with ongoing education reforms that have increased demands to measure student performance in the classroom. I recently organized a survey asking teachers about a variety of assessments: formative, interim, and summative. In determining which is which, I have witnessed their assessment separation anxieties.

Therefore, I am using this “spectrum of assessment” graphic to help explain:

[Spectrum of assessment graphic: formative → interim → summative]

The “bands” between formative and interim assessments and the “bands” between interim and summative blur in measuring student progress.

At one end of the grading spectrum (right) lie the high-stakes summative assessments that are given at the conclusion of a unit, quarter, or semester. In a survey given to teachers in my school this past spring, 100% of teachers understood these assessments to be the final measure of student progress, and their list of examples was much more uniform:

  • a comprehensive test
  • a final project
  • a paper
  • a recital/performance

At the other end lie the low-stakes formative assessments (left) that provide feedback to the teacher to inform instruction. Formative assessments are timely, allowing teachers to modify lessons as they teach. Formative assessments may not be graded, but if they are, they do not contribute many points towards a student’s GPA.

In our survey, 60% of teachers generally understood formative assessments to be those small assessments or “checks for understanding” that let them move on through a lesson or unit. In developing a list of examples, teachers suggested a wide range of formative assessments they used in their daily practice across multiple disciplines, including:

  • draw a concept map
  • determining prior knowledge (K-W-L)
  • pre-test
  • student proposal of project or paper for early feedback
  • homework
  • entrance/exit slips
  • discussion/group work peer ratings
  • behavior rating with rubric
  • task completion
  • notebook checks
  • tweet a response
  • comment on a blog

But there was anxiety in trying to disaggregate the variety of formative assessments from other assessments in the multicolored band in the middle of the grading spectrum, the area given to interim assessments. This school year, the term interim assessment is new, and its introduction has caused the most confusion among members of my faculty. In the survey, teachers were first provided a definition:

An interim assessment is a form of assessment that educators use to (1) evaluate where students are in their learning progress and (2) determine whether they are on track to performing well on future assessments, such as standardized tests or end-of-course exams. (Ed Glossary)

Yet, one teacher responding to this definition on the survey noted, “sounds an awful lot like formative.” Others added small comments in response to the question, “Interim assessments do what?”

  • Interim assessments occur at key points during the marking period.
  • Interim assessments measure when a teacher moves to the next step in the learning sequence.
  • Interim assessments are worth less than a summative assessment.
  • Interim assessments are given after a major concept or skill has been taught and practiced.

Many teachers also noted how interim assessments should be used to measure student progress on standards such as the Common Core State Standards (CCSS) or on standardized tests. Since Connecticut is a member of the Smarter Balanced Assessment Consortium (SBAC), nearly all teachers placed practice for this assessment clearly in the interim band.

But finding a list of generic, or even discipline-specific, examples of other interim assessments has proved more elusive. Furthermore, many teachers questioned how many interim assessments were necessary to measure student understanding. While there are multiple formative assessments contrasted with a minimal number of summative assessments, there is little guidance on the frequency of interim assessments. So there was no surprise when 25% of our faculty was still confused in developing the following list of examples of interim assessments:

  • content or skill based quizzes
  • mid-tests or partial tests
  • SBAC practice assessments
  • Common or benchmark assessments for the CCSS

Most teachers believed that the examples blurred on the spectrum of assessment, from formative to interim and from interim to summative. A summative assessment that went horribly wrong could be repurposed as an interim assessment, or a formative assessment that was particularly successful could move up to be an interim assessment. We agreed that the outcome, or the results, determined how the assessment could be used.

Part of the teachers’ consternation resulted from assigning category weights for each assessment type so that there would be a common grading procedure, using common language, for all stakeholders: students, teachers, administrators, and parents. Ultimately, the recommendation was to set the category weights in the PowerSchool grade book for next year at 30% summative, 10% formative, and 60% interim.
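For readers who want to see how such category weights combine into a single grade, here is a minimal sketch in Python; the 30/10/60 split is the recommendation above, while the per-category averages are hypothetical:

```python
# A minimal sketch of how the recommended category weights (30% summative,
# 10% formative, 60% interim) combine into one final grade.
# The per-category averages below are hypothetical.
WEIGHTS = {"summative": 0.30, "formative": 0.10, "interim": 0.60}

def weighted_grade(category_averages):
    """Combine per-category averages (0-100) into a single weighted grade."""
    return sum(WEIGHTS[cat] * avg for cat, avg in category_averages.items())

averages = {"summative": 88.0, "formative": 95.0, "interim": 82.0}
print(round(weighted_grade(averages), 1))  # prints 85.1
```

Note how heavily the interim band counts under this scheme: a student’s interim average moves the final grade twice as much as the summative average does.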

In organizing the discussion, and this post, I did come across several explanations of the rationale, or “why,” for separating out interim assessments. Educator Rick DuFour emphasized how the interim assessment responds to the question, “What will we do when some of them [students] don’t learn it [content]?” He argues that the data gained from interim assessments can help a teacher prevent failure in a summative assessment given later.

Another helpful explanation came from a 2007 study titled “The Role of Interim Assessments in a Comprehensive Assessment System,” by the National Center for the Improvement of Educational Assessment and the Aspen Institute. This study suggested three reasons to use interim assessments: for instruction, for evaluation, and for prediction. Its authors did not use a color spectrum as a graphic, choosing instead a right triangle to indicate the frequency of interim assessments for instructing, evaluating, and predicting student understanding.

I predict that our teachers will become more comfortable with separating out the interim assessments as a means to measure student progress once they see them as part of a large continuum that can, on occasion, be a little fuzzy. Like the bands on a color spectrum, the separation of assessments may blur, but they are all necessary to give the complete (and colorful) picture of student progress.

At the intersection of data and evaluation, here is a hypothetical scenario:

A young teacher meets an evaluator for a mid-year meeting.

“85% of the students are meeting the goal of 50% or better; in fact, they just scored an average of 62.5%,” the young teacher says.

“That is impressive,” the evaluator responds, noting that the teacher had obviously met his goal. “Perhaps you could also explain how the data illustrates individual student performance and not just the class average?”

“Well,” says the teacher, offering a printout, “according to the (Blank) test, this student went up 741 points, and this student went up…” he continues to read from the spreadsheet, “81 points…and this student went up, um, 431 points, and…”

“So,” replies the evaluator, “these points mean what? Grade levels? Stanine? Standard score?”

“I’m not sure,” says the young teacher, looking a bit embarrassed, “I mean, I know my students have improved, they are moving up, and they are now at a 62.5% average, but…” he pauses.

“You don’t know what these points mean,” answers the evaluator, “why not?”

This teacher, who tracked an upward trajectory of points, was able to illustrate a trend that his students are improving, but the points themselves are meaningless without data analysis. What doesn’t he know?

“We just were told to do the test. No one has explained anything…yet,” he admits.

There will need to be time for a great deal of explaining as the new standardized tests that measure the Common Core State Standards (CCSS), the Smarter Balanced Assessments (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), are implemented over the next few years. These digital tests are part of an educational reform mandate that will require teachers at every grade level to become adept at interpreting data for use in instruction, and that interpretation will require dedicated professional development.

Understanding how to interpret data from these new standardized tests and others must be part of every teacher’s professional development plan. Understanding a test’s metrics is critical because of the possibility of misinterpreting results. For example, the data in the above scenario would suggest that one student (+741 points) is making enormous leaps forward while another (+81) is lagging behind. But consider how different the analysis would be if the scale for measuring student performance on this particular test were organized into levels of 500-point increments. In that circumstance, one student’s improvement of +741 may not seem so impressive, and a student achieving +431 may be falling short of moving up a level. Or the data might reveal that a student’s improvement of 81 points is not minimal, because that student had already maxed out near the top of a level. In the drive to improve student performance, all teachers must have a clear understanding of how the results are measured, what skills are tested, and how this information can be used to drive instruction.
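To make the arithmetic of that leveled scale concrete, here is a minimal sketch in Python, assuming the purely hypothetical 500-point bands described above; the starting scores are invented for illustration, and only the point gains come from the scenario:

```python
# A minimal sketch of the hypothetical scale discussed above, in which
# performance levels are 500-point bands. The starting scores are invented
# for illustration; only the point gains come from the scenario.
LEVEL_SIZE = 500

def levels_crossed(before, after):
    """Count how many level boundaries a score gain actually crosses."""
    return after // LEVEL_SIZE - before // LEVEL_SIZE

for start, gain in [(1100, 741), (1480, 81), (1000, 431)]:
    print(f"+{gain} points -> {levels_crossed(start, start + gain)} level(s)")
```

Under these assumptions, the +81 gain crosses a level boundary while the +431 gain crosses none, which is exactly the kind of reversal that raw point totals hide.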

Therefore, professional development must include information on the metrics for how student performance will be measured on each different test. But professional development for data analysis cannot stop at the PowerPoint! Data analysis training cannot come “canned,” especially if the professional development is marketed by a testing company. Too often, teachers are given information about testing metrics by those outside the classroom, with little opportunity to see how the data can help their practice in their individual classrooms. Professional development must include the conversations and collaborations that allow teachers to share how they could use, or do use, data in the classroom. Such conversations and collaborations will provide opportunities for teachers to review these test results to support or contradict data from other assessments.

Such conversations and collaborations will also allow teachers to revise lessons or units and update curriculum to address weakness exposed by data from a variety of assessments. Interpreting data must be an ongoing collective practice for teachers at every grade level; teacher competency with data will come with familiarity.

In addition, the collected data should live on a software platform that is accessible and integrated with the school’s other assessment programs. The collection of data must be both transparent in reporting results and secure in protecting the privacy of each student. The benefit of technology is that digital testing platforms should be able to calculate results in a timely manner, freeing up time for teachers to implement changes suggested by data analysis. Most importantly, teachers should be trained in how to use this software platform.

Student data is critical in evaluating both teacher performance and curriculum effectiveness, and teachers must be trained to interpret the rich pool of data that is coming from the new standardized tests. Without the professional development steps detailed above, however, evaluation conversations in the future might sound like the response in the opening scenario:

“We just were told to do the test. No one has explained anything…yet.”

As the 10th grade English teacher, Linda’s role had been to prepare students for the rigors of the State of Connecticut Academic Performance Test, otherwise known as the CAPT. She had been preparing students with exam-released materials, and her collection of writing prompts stretched back to 1994. Now that she will be retiring, it is time to clean out the classroom. English teachers are not necessarily hoarders, but there was evidence to suggest that Linda was stocked with enough class sets of short stories to ensure students were always more than adequately prepared. Yet she was delighted to see these particular stories go.
“Let’s de-CAPT-itate,” we laughed and piled up the cartons containing well-worn copies of short stories.
Out went Rough Touch. Out went Machine Runner. Out went Farewell to Violet and A View from the Bridge.
I chuckled at the contents of the box labeled “depressing stories” before chucking them onto the pile.
Goodbye to Amanda and the Wounded Birds. Farewell to A Hundred Bucks of Happy. Adios to Catch the Moon. We pulled down another carton labeled “dog stories” containing Liberty, Viva New Jersey, and The Dog Formally Known as Victor Maximilian Bonaparte Lincoln Rothbaum. They too were discarded without a tear.
The chief flaw of the CAPT’s Response to Literature section was its ludicrous dilution of Louise Rosenblatt’s Reader Response Theory, in which students were asked to “make a connection”:

What does the story say about people in general?  In what ways does it remind you of people you have known or experiences you have had?  You may also write about stories or other books you have read, or movies, works of art, or television programs you have seen.

That question was difficult for many literal readers who, responding to the most obvious plot point, might answer, “This story has a dog and I have a dog.” How else to explain all the dog stories? On other occasions, I found out that, while taking standardized tests in the elementary grades, students had been told, “if you have no connection to the story, make one up!” Over the years, the CAPT turned our students into very creative liars rather than literary analysts.


The other flaw in the Response to Literature section was the evaluation question. Students were asked,

How successful was the author in creating a good piece of literature?  Use examples from the story to explain your thinking.

Many of our students found this a difficult question to negotiate, particularly if they thought the author did not write a good piece of literature, but rather an average or mildly enjoyable story. They did manage to make their opinions known, and one of my favorite student responses began, “While this story is no Macbeth, there are a few nice metaphors…”

Most of the literature on the CAPT did come from reputable writers, but they were not the quality stories found in anthologies like Saki’s The Interlopers or Anton Chekhov’s The Bet. To be honest, I did not think the CAPT essays were an authentic activity, and I particularly did not like the selections on the CAPT’s Response to Literature section.

Now the CAPT will be replaced by the Smarter Balanced Assessments (SBAC), as Connecticut has selected SBAC as its assessment consortium to measure progress with the Common Core State Standards, and the test will move to 11th grade. This year (2014) is the pilot test only; there are no exemplars and no results. The SBAC is digital, and in the future we will practice taking this test on our devices, so there is no need to hang onto class sets of short stories. So why am I concerned that there will be no real difference with the SBAC? Cleaning the classroom may be a transition more symbolic of our move from paper to keyboard than of our gaining an authentic assessment.

Nevertheless, Linda’s classroom looked several tons lighter.

“We are finally de-CAPT-itated!” I announced looking at the stack of boxes ready for the dumpster.

“Just in time to be SBAC-kled!” Linda responded cheerfully.

An ad supporting the Common Core State Standards posted by the Bill and Melinda Gates Foundation featured a Missouri Teacher of the Year, Jamie Manker, saying, “I support the Common Core because it’s asking kids to think.”


My immediate reaction was, “Good Heavens! What did Manker’s students do before the implementation of the Common Core? Thinking should have been happening all along!”

Of course her students had been thinking, or she would not have been a teacher of the year. Her statement may have been truncated to fit on the #SupporttheCore poster. Yet she is not alone in making such statements. A number of teachers of the year have stated that their students are doing better work because of the Common Core:

From Nancie Lindblom, Arizona’s 2013 Teacher of the Year: “The new standards provide the opportunity to do this by increasing the expectations for all students, allowing me to challenge my students to think analytically.”

From Ms. Sponaugle, West Virginia’s 2014 Teacher of the Year: “My students are engaged, they’re motivated, and they’re learning, and that’s what the common core standards are all about: preparing our children to be confident and capable in an ever-more competitive world.”

Again, these admissions are puzzling. Why would a teacher whose credentials and instructional practice are exemplary enough to warrant a state award wait for an “opportunity” to challenge students to think analytically? Or how would a teacher of the year not already be engaging students in order to prepare them for an ever-more competitive world? Did they not already use a set of standards before the Common Core in their classrooms?

Without context, these teachers’ statements make them appear less competent. In an ironic twist, the Bill and Melinda Gates Foundation’s use of teachers of the year as promotional tools has the unfortunate effect of leaving them open to the following line of criticism: what kind of teachers were they B.C.C. (Before the Common Core) when they admit their students were not being challenged?

Their overstatements on behalf of the Common Core contribute to the unfortunate generalization that B.C.C. students were not engaged. They were not being prepared for a competitive world. They did not think.

Collectively, their statements open up a single tricky question for these teachers of the year: Why not?