Bags ready? Set to find great bargains? Go to Newtown, Connecticut, for the Friends of the C.H. Booth Library book sale, where over 100,000 books, records, and DVDs go on sale annually. Their book sale always marks for me the beginning of the book sale season. This year’s starting date was July 12, 2014.

For the first time, I went on the admission day ($5) and brought extra help (husband & son) to follow me with bags. Even then, I was too late to get the 20 or so copies of The Great Gatsby I saw someone packing up at the checkout counter. My son noted that I also missed out on copies of The Hunger Games trilogy.
“The woman was only four feet away from you when I saw her stuffing them in her bag,” he claimed, “but I wasn’t going to tackle her.”

Fortunately, thanks to the diligent efforts of what looked like a small army of volunteer Friends of the Library, the tables were well organized by genre and author. I was able to get multiple copies of the 12th grade summer reading book, A Walk in the Woods. In addition, I filled bags with the required summer reading for Advanced Placement English Literature, including Little Bee, A Thousand Splendid Suns, and Bel Canto. I also found copies for the grade 10 world literature library, including The Places in Between, Life of Pi, The Curious Incident of the Dog in the Night-Time, and A Long Way Gone. There were also books to add to classroom libraries for independent reading, including Dairy Queen, Elsewhere, and a pile of books from Rick Riordan’s Percy Jackson series.

[Photo: Counting books at the checkout with a friendly volunteer]

[Photo: Five bags of books for classroom libraries for $229.00; a bargain!]

The book sale at Newtown is a model of efficiency. There is room to move between tables, the books are properly sorted by genre (for the most part), and the volunteer help is cheerful and quick.

“You must be using these in a school?” suggested the woman checking us out as she counted out 20 copies of The Help.
“Actually,” my son replied feigning seriousness, “we really like this book….we’re going to read every single copy.”
“Oh,” she started, and then smiled, “you’re terrible…”

What is not terrible is that I spent $229 for over 80 books: some of them core texts, some for independent reading.
The summer book sale season helps me put books in the hands of readers. The Newtown Friends of the Library book sale does that extremely well.

[Photo: Our 12th graders during independent reading (SSR)]

How challenging is it for a teacher to run an independent reading program? Very challenging. That is the only thing that Newsweek reporter Alexander Nazaryan got right in his NYTimes op-ed piece The Fallacy of ‘Balanced Literacy’ (7/6/14).

His lack of success in having students choose their own reading for pleasure over the course of one school year should not grant him the opportunity to decry the practice. His own failure to encourage students to engage in reading for pleasure should not dissuade other teachers from encouraging students to develop life-long reading habits. Had he the proper training and resources in balanced literacy, he would have witnessed how the challenge of implementing independent reading in a classroom can be met at any grade level, and how it is a critical step to making students life-long readers.

If he had the training, he would recognize that teachers who are familiar with books for specific age groups and levels of interest can make reading recommendations to students or help facilitate highly successful peer-to-peer book recommendations. If he had the resources of high-interest, low-level texts in jam-packed classroom libraries for his students, he would have increased the level of engagement. If he had used the reading time to confer briefly with individual students about their reading while the others read quietly, he would have established a classroom routine that allowed him to informally measure student growth as they read. Finally, if he had impressed upon students the importance of reading for pleasure, he would have helped their academic success in all other classes.

Research studies (compiled by the American Library Association) have determined that reading outside of the classroom is the best predictor for student success:

The amount of free reading done outside of school has consistently been found to relate to achievement in vocabulary, reading comprehension, verbal fluency, and general information. Students’ reading achievement correlates with success in school and the amount of independent reading they do (Anderson, Wilson, and Fielding 1988; Guthrie and Greaney 1991; Krashen 1993; Cunningham and Stanovich 1991; Stanovich and Cunningham 1993).

This research from the ALA is borne out by The National Assessment of Educational Progress (NAEP), which has monitored the academic performance of 9-, 13-, and 17-year-old students since the 1970s. Long-term trend assessments in reading are measured on a scale of 500 points. In taking the NAEP, students volunteered information on their reading habits. The 2012 results demonstrated that the average score for the 22% of 13-year-olds who never (or hardly ever) read independently was 25 points lower than that of students who read every day. By age 17, the difference had increased to 30 points.

[Image: NAEP scores for 13-year-olds, showing higher standardized test scores for students who read for pleasure]

This data confirms what we have witnessed in our own classrooms. Our students are given SSR (sustained silent reading) time in class for independent reading in grades 7-12. Independent reading at our school means that students get to choose what they would like to read without having to take a quiz or a test on the book. The only “requirements” are that students keep a running record (we are using Shelfari) of their independent reading books. We ask them to share their recommendations with their peers. We talk to them about what they read.

[Photo: Holding up cards with the number of books read independently (including one world lit choice book)]

Sometimes students are offered a choice from a lengthy list of thematically connected books, and sometimes the choice must be in a particular genre (non-fiction, memoir, world literature). Other times, the choice is entirely open and students can read whatever books they want. Our block schedule allows us the luxury of offering students 15-20 minutes each period. A quick estimate shows that over the course of the school year (40 weeks), meeting twice weekly (roughly 30 minutes minimum a week), students will be offered a minimum of 20 hours of reading time in class. They make very good use of that time.
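For readers who want to check that estimate, here is a minimal sketch of the arithmetic (the session counts and lengths are the figures given above; the variable names are mine):

```python
# A quick check of the in-class reading time estimate above.
# Figures from the post: two sessions per week at roughly 15 minutes
# each (a 30-minute weekly minimum), over a 40-week school year.

SESSIONS_PER_WEEK = 2
MINUTES_PER_SESSION = 15   # low end of the 15-20 minute range
WEEKS_PER_YEAR = 40

weekly_minutes = SESSIONS_PER_WEEK * MINUTES_PER_SESSION  # 30 minutes
yearly_hours = weekly_minutes * WEEKS_PER_YEAR / 60       # 20.0 hours

print(f"Minimum in-class reading time: {yearly_hours:.0f} hours per year")
```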

[Photo: Holding up the number of books read in grade 9]

The main goal of our independent reading program is to encourage students to read beyond the walls of the classroom; the 15 minutes spent in class is intended as a “hook” to connect students with books they might want to read or as a “refresher” to reconnect them with a book already being read.

[Photo: Seniors holding up the number of books read independently in a semester]

Encouraging students to read independently takes practice, and the time we provide in class contributes to that reading practice. At the end of this year, we are celebrating by taking group photos of students proudly holding up the number of books they have read independently over the past school year. So, rather than accept the confessed failure of an op-ed piece that mischaracterizes independent reading, written by someone who has left education, take a look at how the challenge of independent reading is being successfully met in our classrooms. The proof is in the pictures.

Throwbacks in education are common.

This time, Robert Pondiscio, a Senior Fellow and Vice President for External Affairs at the Thomas B. Fordham Institute, is itching for a fight to reopen old “reading war” wounds. He has taken umbrage at the NYTimes (7/2/14) opinion piece Balanced Literacy Is One Effective Approach by Lucy Calkins, director of the Teachers College Reading and Writing Project at Columbia University and a proponent of balanced literacy.

Pondiscio’s op-ed (7/3/2014), titled Why Johnny Won’t Learn to Read, charges back into the heat of that fight by referencing the 1997 National Reading Panel’s review of studies on the teaching of reading.

In reminding everyone that “phonics won,” Pondiscio jettisons the definition of the word “balanced” in the phrase balanced literacy. The Oxford Online Dictionary states that when “balanced” is used as an adjective, it is defined as:

  • keeping or showing a balance; in good proportions
  • taking everything into account; fairly judged or presented
  • having different elements in the correct proportion

Since 1997, the term “balanced literacy” has come to mean that the phonics approach should be in good proportion with other approaches for teaching reading and writing. Pondiscio, however, recasts the phrase “balanced literacy” in mythological terms, as a hydra…“a new head for whole language.” His interpretation is unsupported by the definition.

Pondiscio’s wish that the “win” by phonics would eradicate whole language’s contributions to teaching literacy is overstated, as some of the recommendations by the NRP could be associated with whole language:

  • Teaching vocabulary words—teaching new words, either as they appear in text, or by introducing new words separately. This type of instruction also aids reading ability.
  • Reading comprehension strategies—techniques for helping individuals to understand what they read. Such techniques involve having students summarize what they’ve read, to gain a better understanding of the material.

Beyond his use of the NRP’s 17-year-old study, there is another problem in his choice of evidence: a quote by Susan Pimentel, one of the “principal authors of the Common Core.” Pimentel lacks the academic credentials (a BS in early childhood and a law degree) to qualify as an expert in literacy, yet she claims that balanced literacy is “worrisome and runs counter to the letter and spirit of Common Core.” In contrast, many early literacy educators find the ELA CCSS worrisome, running counter to the spirit of new and emerging readers.

Moreover, Pimentel’s on-again/off-again association with the other CCSS “architects” (David Coleman and Jason Zimba) from Student Achievement Partners (SAP) was laid bare by Mercedes Schneider in a February 27, 2014, post: Schneider Dissects Sue Pimentel’s Role in Common Core Drafting; Exposes How 3 People Were Main CCSS Architects. In it, Schneider documents Pimentel’s role through SAP’s tax filings and marginalizes Pimentel’s contributions with the suggestion that her inclusion on the CCSS was gender-based: “a female speaking to an audience from a profession that is primarily female, and that is good public relations for selling the CCSS product.”

Further on in Pondiscio’s op-ed, there is a reference to a NY Department of Education study of Core Knowledge (2008-2012), which demonstrated “significantly stronger gains than comparison school students on nearly all measures” for 1,000 students in grades K-2 in 20 schools. The use of this study is no surprise; Pondiscio promotes the Core Knowledge program because of the leadership of E.D. Hirsch, Jr., a Fordham Medal of Valor winner. What is missing is information on the size of the study, which involved less than 1% of the K-2 student population (1.1 million total student enrollment in 2013), and on its methodology in comparison to other literacy programs. Hirsch himself concedes that “The study was too small. We need a bigger one – and one that gauges long-term as well as short-term effects.”

But what is Pondiscio’s most damning complaint against balanced literacy?

 “While the Common Core focuses kids’ attention on what the text says, balanced literacy often elicits a personal response to literature.” (Pondiscio)

Let me repeat his concern.

Pondiscio is distressed that a student may respond emotionally to a work of literature.

How is this a problem?

I am quite certain that a personal response in a reader is exactly what any author of literature hopes to achieve.

Reading literature is more than a decoding exercise. Reading literature at any age, especially good complex literature, is an exercise that connects the reader and the author in an intimate bond of empathy.

Balanced literacy does require that a student use evidence from a text, but its advantage is that it recognizes that students cannot be silenced on what they think or feel about their reading, whether the choice of texts is theirs or not.

Pondiscio’s issue with whole language is that it emphasized reading for meaning instead of spelling, grammar, and sounding words out. In making this final part of his argument, Pondiscio reduces words to data or things devoid of meaning.

Such thinking reminds me of a line from Al Pacino’s Looking for Richard, a film study on William Shakespeare’s Richard III.

While filming on the streets of NYC, Pacino is seen asking passers-by about their relationship to Shakespeare. One panhandler stops long enough to explain how he feels the words in Shakespeare “instruct us”:

If we think words are things and have no feelings in words…then we say things to each other that mean nothing.

But if we felt what we said, we’d say less and mean more.

The panhandler shuffles off after offering his personal explanation of words and meaning.

Pondiscio claims he wants “students to grapple with challenging texts that are worth reading,” but grappling with what the panhandler says about the meaning of words in those texts, challenging or not, is even more important.

I teach students how to write. I do not make them writers.

There is a difference.

I have taught the writing process for over twenty years. I have taught students at different grade levels how to write for a specific audience in a specific format for a particular purpose. For example:

  • Write a letter to your principal asking for an extra 15 minutes of recess. (persuasive letter)
  • Research Shakespeare’s use of biblical imagery in Hamlet. (literary analysis)
  • Imagine you are a citizen of the ancient city state of Sparta. What would a typical day be like? (narrative)

I know how to teach students to incorporate evidence in their writing. I have lessons on how to find the best evidence, and I have lessons on how to use a “stem sentence” to incorporate their evidence. I have lessons on how students should cite their evidence.

I can teach students how to use order in establishing a position in an argument, how to expand their ideas in analysis, and how to use sequence in telling a story.

I can teach students how to use a “formula” approach if they get stuck by having them:

  • Start with a question, a quote, a definition, or an example;
  • Write a thesis with three points and then develop each of these points into paragraphs;
  • Restate their best idea in the conclusion.

I teach the writing process: draft, edit, review, revise, (repeat), polish, and publish.

After all these lessons, I am confident that my students can write better.

I am not sure they are writers.

This past week, I went to hear the writer Dani Shapiro (Still Writing, Devotion: A Memoir) talk about her creative process as a writer. I thought I might hear some new ideas or inspiration that could help me teach my students to become writers.

Ms. Shapiro was composed as she ruined any notion that I could offer my students more than I already did in class. She was gracious as she crushed my hopes for easy solutions. I scrambled to take notes, but fortunately, what she said that night is posted on her blog:

Are there steps that lead the writer to the page? Steps that we can take, teetering one after the next, that will somehow get us into that longed-for state of the page rising up, the world receding?

I’m sorry to say that after all my musing I was unable to come up with a game plan, for myself or anyone else. Honestly, I never really thought I would, because every writer’s path to the page is unique and fraught in its own special way.

As she spoke, the issues I had with Michael, a student I had in class this year, came to mind.

All year, Michael was compelled to write, but not the writing I required. He would hang around after class asking me to “quick read” a story. (Note: they were very dark short stories.) After an assignment, he would ask me to name my favorite part of an essay he had handed in. Before I could speak, he would read aloud his favorite line from that essay.

He took umbrage when I made a critical comment. He could not write on demand. He dawdled with all sorts of technology while others scratched out a timed essay. He hated turning in his incomplete work complaining “I didn’t get to say what I wanted” or “I just couldn’t get started.”

After class, I would correct the essays. Michael’s papers could begin like any other student’s paper. Pronoun antecedent issues. Capitalization problems. Missing apostrophes. I would write the usual blunt comments, “Get to the point!”, in the margins. But I learned to look for that sentence, usually somewhere about two-thirds of the way through his essay…and I would have to stop.

Everything Michael wrote before that sentence in an essay was in need of revision, but everything after that sentence in the essay was different, shaded…altered. He could write something that silenced the teacher voice in my head.

“I knew that was good,” he would say looking for my approval.
“Yes,” I would agree, “that was very good. I have no suggestions.”
That would please him, until the next writing assignment he would be forced to write.

As Shapiro states, there are no prescribed steps I can devise to “lead the writer to the page.” She could not help me develop a game plan to get my students “into that longed-for state of the page rising up, the world receding,” just as there was no game plan that made Michael a writer. I know he is on a unique path, and I know I did not teach him this path.

His path illustrates the difference, a difference I recognize between my teaching writing and my teaching a writer.

Since I write to understand what I think, I have decided to focus this particular post on the different categories of assessments. My thinking has been motivated by helping teachers with ongoing education reforms that have increased demands to measure student performance in the classroom. I recently organized a survey asking teachers about a variety of assessments: formative, interim, and summative. In determining which is which, I have witnessed their assessment separation anxieties.

Therefore, I am using this “spectrum of assessment” graphic to help explain:

[Image: the spectrum of assessment]

The “bands” between formative and interim assessments and the “bands” between interim and summative blur in measuring student progress.

At one end of the grading spectrum (right) lie the high-stakes summative assessments that are given at the conclusion of a unit, quarter, or semester. In a survey given to teachers in my school this past spring, 100% of teachers understood these assessments to be the final measure of student progress, and the list of examples was much more uniform:

  • a comprehensive test
  • a final project
  • a paper
  • a recital/performance

At the other end (left) lie the low-stakes formative assessments that provide feedback to the teacher to inform instruction. Formative assessments are timely, allowing teachers to modify lessons as they teach. Formative assessments may not be graded, but if they are, they do not contribute many points towards a student’s GPA.

In our survey, 60% of teachers generally understood formative assessments to be those small assessments or “checks for understanding” that let them move on through a lesson or unit. Teachers suggested a wide range of examples of formative assessments they used in their daily practice in multiple disciplines, including:

  • draw a concept map
  • determining prior knowledge (K-W-L)
  • pre-test
  • student proposal of project or paper for early feedback
  • homework
  • entrance/exit slips
  • discussion/group work peer ratings
  • behavior rating with rubric
  • task completion
  • notebook checks
  • tweet a response
  • comment on a blog

But there was anxiety in trying to disaggregate the variety of formative assessments from other assessments in the multi-colored band in the middle of the grading spectrum, the area given to interim assessments. This school year, the term interim assessment is new, and its introduction has caused the most confusion among members of my faculty. In the survey, teachers were first provided a definition:

An interim assessment is a form of assessment that educators use to (1) evaluate where students are in their learning progress and (2) determine whether they are on track to performing well on future assessments, such as standardized tests or end-of-course exams. (Ed Glossary)

Yet, one teacher responding to this definition on the survey noted, “sounds an awful lot like formative.” Others added small comments in response to the question, “Interim assessments do what?”

  • Interim assessments occur at key points during the marking period.
  • Interim assessments measure when a teacher moves to the next step in the learning sequence.
  • Interim assessments are worth less than a summative assessment.
  • Interim assessments are given after a major concept or skill has been taught and practiced.

Many teachers also noted how interim assessments should be used to measure student progress on standards such as those in the Common Core State Standards (CCSS) or standardized tests. Since our State of Connecticut is a member of the Smarter Balanced Assessment Consortium (SBAC), nearly all teachers placed practice for this assessment clearly in the interim band.

But finding a list of generic or even discipline-specific examples of other interim assessments has proved more elusive. Furthermore, many teachers questioned how many interim assessments were necessary to measure student understanding. While there are multiple formative assessments contrasted with a minimal number of summative assessments, there is little guidance on the frequency of interim assessments. So there was no surprise when 25% of our faculty was still confused in developing the following list of examples of interim assessments:

  • content or skill based quizzes
  • mid-tests or partial tests
  • SBAC practice assessments
  • Common or benchmark assessments for the CCSS

Most teachers believed that the examples blurred on the spectrum of assessment, from formative to interim and from interim to summative. A summative assessment that went horribly wrong could be repurposed as an interim assessment, or a formative assessment that was particularly successful could move up to be an interim assessment. We agreed that the outcome, or the results, was what determined how the assessment could be used.

Part of the teacher consternation was the result of assigning category weights for each assessment so that there would be a common grading procedure using common language for all stakeholders: students, teachers, administrators, and parents. Ultimately, the recommendation was to set category weights to 30% summative, 10% formative, and 60% interim in the PowerSchool grade book for next year.
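To make those weights concrete, here is a minimal sketch of how a weighted term grade would be computed under the recommended 30/10/60 split. The category averages are invented for the example; in practice, PowerSchool applies the weights inside its grade book:

```python
# Sketch: a term grade under the recommended category weights.
# Weights from the post: 30% summative, 10% formative, 60% interim.

weights = {"summative": 0.30, "formative": 0.10, "interim": 0.60}

# Hypothetical category averages (percent) for one student.
scores = {"summative": 88.0, "formative": 95.0, "interim": 81.0}

term_grade = sum(weights[cat] * scores[cat] for cat in weights)
print(f"Weighted term grade: {term_grade:.1f}%")  # 84.5%
```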

In organizing the discussion, and this post, I did come across several explanations of the rationale or “why” for separating out interim assessments. Educator Rick DuFour emphasized how the interim assessment responds to the question, “What will we do when some of them [students] don’t learn it [content]?” He argues that the data gained from interim assessments can help a teacher prevent failure on a summative assessment given later.

Another helpful explanation came from a 2007 study titled “The Role of Interim Assessments in a Comprehensive Assessment System,” by the National Center for the Improvement of Educational Assessment and the Aspen Institute. This study suggested three reasons to use interim assessments: for instruction, for evaluation, and for prediction. The authors did not use a color spectrum as a graphic, but chose instead a right triangle to indicate the frequency of the interim assessment for instructing, evaluating, and predicting student understanding.

I also predict that our teachers will become more comfortable with separating out the interim assessments as a means to measure student progress once they see them as part of a larger continuum that can, on occasion, be a little fuzzy. Like the bands on a color spectrum, the separation of assessments may blur, but they are all necessary to give the complete (and colorful) picture of student progress.

Yesterday, there was one paperback copy of The Hunger Games squeezed in between other trade fiction. Two hardcover copies of Mockingjay were together on an opposite shelf. These books from The Hunger Games trilogy by Suzanne Collins had been donated to a local Goodwill store. When I found them tucked away on the store’s shelves, I knew that the series had met a tipping point: still popular, but not popular enough to treasure and keep.

The Hunger Games series (2008-2010) has been to today’s graduating seniors what the Harry Potter series (1997-2007) was to today’s 28-year-olds…a collective reading experience. The series developed a dedicated young adult following, and the most obvious sign of that dedication was the carting around of hardcover editions, because each reader could not wait for the book to go to paperback.

Once The Hunger Games series caught fire (literally), book conversations centered on Katniss. There were speculations on her choice of Gale or Peeta. Predictions on the fate of District 13 were rampant. The publication of each new book in the series was a major event; students shared copies from period to period. When the first film, The Hunger Games, came out, students critiqued every detail that was present and noticed every detail that was missing.

Our Reading/English/Language Arts teachers loved having students read these books as well. The series laid connections to more traditional texts such as the Greek myths or Romeo and Juliet. There were plenty of connections to current events in the economy and media that could be made as well.

Finding three copies on the used book shelves now, however, signals a sputtering of interest. Students will still pick up the used copies from the book carts in the classroom, but the rabid fans have moved on. Collins has helped this year’s graduating seniors develop their independent reading skills, the kind of skills that will serve them well in the future.

There are benefits to the recycling of books. I spent $7.98 on the three copies that would have cost $27.66 if purchased new. The consequence of reaching a tipping point in popularity is a benefit for classroom libraries, which means finding used books from this series will be easier now that…“the odds be ever in our favor.”

It’s official.

The chocolate milk debate as a test writing prompt is dead in Connecticut for all grade levels.

Yes, that old stalwart, “Should there be chocolate milk in schools?”, offered to students as a standardized writing prompt, was made null and void with one stroke of Governor Malloy’s pen. According to the Hartford Courant (6/12/14), in Malloy Veto Keeps Chocolate Milk On School Lunch Menus,

“to the vast relief of school kids, nutritionists, milk producers and lawmakers, Gov. Dannel P. Malloy used his veto power Thursday to kill a bill that would have banned chocolate milk sales in Connecticut schools.” 

Apparently, the same nutritional charts, editorials, and endorsements from dairy groups that are organized into packets and given to students in grades 3-11, to teach them how to incorporate evidence into a mock persuasive argument under testing conditions, were convincing enough to help real CT residents make a persuasive argument to legislators. To show his solidarity with the people, Governor Malloy quaffed a container of chocolate milk before vetoing the bill that would have banned the sale of chocolate milk in schools.

Typically, the writing prompt is addressed in English/Language Arts (ELA) class in elementary schools, but in middle and high schools, a persuasive essay is often the responsibility of the social studies teacher. The assumption here is that the skill of persuasion requires research and the incorporation of evidence, both taught in social studies classes. In contrast, ELA classes are dedicated to the analysis of literature through essays using a range of skills: identifying author’s craft, identifying author’s purpose, editing, and revising. The responsibilities for the writing portion of an exam are divided between the ELA classes for the literary analysis essay and the social studies classes for the persuasive essay. This design is intended to promote an interdisciplinary effort, but it is an intellectually dishonest division of labor.

ELA teachers can prepare students for standardized tests using ELA content (literature and grammar) to improve skills. Math and science teachers are also tied to their disciplines’ content in order for their students to be prepared. Social studies is the only core discipline with the test-prompt disconnect.

So, what topics might test creators design to replace the infamous chocolate milk debate prompt? Before test creators start manufacturing new and silly debates, there is a window of opportunity where attention could be brought to this disconnect between content and testing in writing. Here is the moment where social studies teachers should point out to test creators the topics from their curriculum that could be developed into writing prompts. Here is a foot in the door for the National Council for the Social Studies to introduce writing prompts that complement their content. For example, there could be prompts about Egyptian culture, prompts on the American Revolution, or prompts about trade routes and river-based communities. Too often, social studies teachers must devote class time to topics unrelated to their curriculum.

The Smarter Balanced Assessment Field Test given this past spring (2014) to 11th graders was about the use of social media by journalists. As students took the test, I overheard the following exchange:

“Of course they use social media,” grumbled one student, “who is going to stop them?”
“Do they think they are ‘cool’ because they mentioned Twitter?” countered another.

Previous standardized test writing prompts (in Connecticut, the CMT and CAPT) for high school and middle school have asked students to write persuasively on the age at which students should be able to drive; whether wolves should be allowed in Yellowstone National Park; whether to permit the random drug testing of high school students; and whether uniforms should be required in schools.

Please notice that none of these aforementioned prompts is directly related to the content of any social studies curricula. Furthermore, the sources prepared as a database for students to use as evidence in responding to these prompts are packets of newspaper opinion columns, polls, and statistical charts; no serious research is required.

Here is the moment when social studies teachers and curriculum leaders need to point out how academically dishonest the standardized test writing prompt is as a measure of instruction in their discipline. No longer should the content of social studies be abandoned for inauthentic debate.

The glass in Connecticut is half-full now that students can have chocolate milk in schools. Time for test creators to empty out the silly writing prompts that have maddened social studies teachers for years.

Time to choose content over chocolate.


So… who appointed Ruth Graham of Slate Magazine to the book police patrol? In a recent piece titled Against YA (6/5/14), Graham lays out her argument to adult readers:

“Read whatever you want. But you should feel embarrassed when what you’re reading was written for children.”

Why? The best books tell good stories, and good stories are what we share as humans. Good stories are found in picture books; good stories are found in children’s chapter books. Good stories are found in folk tales, fairy tales, and fables. Good stories are not exclusive to one age group or another. Why the need to separate serious from not-so-serious book choices by a reader’s birthdate?

The problem Graham is fabricating is that readers will become “stunted” on steady diets of YA (young adult) literature. Her snobbish references to Alice Munro (whom I love) and John Updike (whom I do not love) as “authors whose work has only become richer to me as I have grown older” are self-aggrandizing. There are plenty of “serious” award-winning authors who make me roll my eyes, but I would not withhold a text from someone who wants to read it. A good story is ageless; a good story is timeless.

So, Ruth Graham, I say readers should be able to read anything. Readers will learn, as you said, what “authors have to say about love, relationships, sex, trauma, happiness, and all the rest—you know, life—from the reading they choose to do,” regardless of whether the book bears the stigma of YA, or maybe, because that book does.


At the intersection of data and evaluation, here is a hypothetical scenario:

A young teacher meets an evaluator for a mid-year meeting.

“85% of the students are meeting the goal of 50% or better; in fact, they just scored an average of 62.5%,” the young teacher says.

“That is impressive,” the evaluator responds, noting that the teacher had obviously met his goal. “Perhaps you could also explain how the data illustrates individual student performance and not just the class average?”

“Well,” says the teacher, offering a printout, “according to the (Blank) test, this student went up 741 points, and this student went up…” he continues to read from the spreadsheet, “81 points…and this student went up, um, 431 points, and…”

“So,” replies the evaluator, “these points mean what? Grade levels? Stanine? Standard score?”

“I’m not sure,” says the young teacher, looking a bit embarrassed, “I mean, I know my students have improved, they are moving up, and they are now at a 62.5% average, but…” he pauses.

“You don’t know what these points mean,” answers the evaluator, “why not?”

This teacher, who tracked an upward trajectory of points, was able to illustrate a trend that his students are improving, but the numbers or points his students receive are meaningless without data analysis. What doesn’t he know?

“We just were told to do the test. No one has explained anything…yet,” he admits.

There will need to be time for a great deal of explaining as the new standardized tests that measure the Common Core State Standards (CCSS), the Smarter Balanced Assessments (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), are implemented over the next few years. These digital tests are part of an educational reform mandate that will require teachers at every grade level to become adept at interpreting data for use in instruction. That interpretation will require dedicated professional development at every grade level.

Understanding how to interpret data from these new standardized tests and others must be part of every teacher’s professional development plan. Understanding a test’s metrics is critical because there exists the possibility of misinterpreting results. For example, the data in the above scenario would suggest that one student (+741 points) is making enormous leaps forward while another student (+81) is lagging behind. But consider how different the data analysis would be if the scale for measuring student performance on this particular test were organized in levels of 500-point increments. In that circumstance, one student’s improvement of +741 may not seem so impressive, and a student achieving +431 may be falling short of moving up a level. Or perhaps the data might reveal that a student’s improvement of 81 points is not minimal, because that student had already maxed out towards the top of the scale. In the drive to improve student performance, all teachers must have a clear understanding of how the results are measured, what skills are tested, and how this information can be used to drive instruction.
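Here is a minimal sketch of that reasoning, assuming the hypothetical 500-point-per-level scale described above (the point gains are the ones from the opening scenario):

```python
# Sketch: why raw point gains mean little without the test's scale.
# Assumes a hypothetical scale where each performance level spans
# 500 points; the gains are from the scenario above.

LEVEL_SPAN = 500  # hypothetical points per performance level

gains = {"Student A": 741, "Student B": 81, "Student C": 431}

for name, gain in gains.items():
    levels = gain / LEVEL_SPAN
    note = "a full level gained" if levels >= 1 else "short of a full level"
    print(f"{name}: +{gain} points = {levels:.2f} levels ({note})")

# Note: a small gain (e.g., +81) can still matter if the student
# already sits near the top of the scale.
```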

Therefore, professional development must include information on the metrics for how student performance will be measured for each different test. But professional development for data analysis cannot stop at the PowerPoint! Data analysis training cannot come “canned,” especially if the professional development is marketed by a testing company. Too often teachers are given information about testing metrics by those outside the classroom, with little opportunity to see how the data can help their practice in their individual classrooms. Professional development must include the conversations and collaborations that allow teachers to share how they could use or do use data in the classroom. Such conversations and collaborations with other teachers will provide opportunities for teachers to review these test results to support or contradict data from other assessments.

Such conversations and collaborations will also allow teachers to revise lessons or units and update curriculum to address weaknesses exposed by data from a variety of assessments. Interpreting data must be an ongoing collective practice for teachers at every grade level; teacher competency with data will come with familiarity.

In addition, the collection of data should be on a software platform that is accessible and integrated with other school assessment programs. The collection of data must be both transparent in its results and secure in protecting the privacy of each student. The benefit of technology is that digital testing platforms should be able to calculate results in a timely manner, freeing up time for teachers to implement the changes suggested by data analysis. Most importantly, teachers should be trained in how to use this software platform.

Student data is critical in evaluating both teacher performance and curriculum effectiveness, and teachers must be trained in how to interpret the rich pool of data that is coming from the new standardized tests. Without the professional development steps detailed above, however, evaluation conversations in the future might sound like the response in the opening scenario:

“We just were told to do the test. No one has explained anything…yet.”

As the school year comes to a close, the buzzphrase is “student growth.” All stakeholders in education want to be able to demonstrate student growth, especially if that growth follows an upward trajectory.

Last week I had an opportunity to consider student growth with a different lens, and that lens was provided by a graduating senior who was preparing a presentation for a group of 7th & 8th graders. I had assigned Steven and his classmates the task of developing TED-like talks that they would give to the middle schoolers. The theme of these talks was “The Most Important Lesson I Learned in 13 Years of Education.” The talk was required to be short (3-5 minutes), to incorporate graphics, and to make a connection between what was learned and the outside world. I asked students to come up with some “profound” idea that made the lesson the most important one in their academic career. I gave them several periods to pitch ideas and practice.

Steven’s practice presentation was four slides long on the lesson “Phase Changes of Water.” There was a graphic on each slide that illustrated the changes of water from solid ice to liquid to vapor. The last slide illustrated the temperatures at which water underwent a change and the amount of heat energy or calories expended to make that phase change (below):

[Image: phase change plot for water]

“What you see in this graph,” Steven explained, “is that there is a stage, a critical point, where the amount of energy needs to increase to have water change from solid to liquid. The graph shows that stage of changing from solid to liquid is shorter than the stage where the amount of energy needs to increase to change water into steam.”
He pointed to the lines on the graph, first the shorter line labeled melting and then the longer line labeled vaporizing.
“So how is this a profound idea?” he asked. “Well, this chart is just like anything you might want to improve on. Sometimes you are working to go to the next level, but you hit a plateau, a critical point. You need to expend more energy for a longer period of time to get to that next level. Thank you.”

We clapped. Everyone sitting in class agreed that Steven had met the assignment. He met the time limit. He had graphics. He made a connection.
I saw something even more profound.

In less than three minutes, Steven had used what he had learned in physics to teach me a new way to consider the learning process. I could see how phase changes, or phase transitions, could illustrate the relationship between energy expended over time and academic performance. I could relabel the axis marked heat energy as “energy expended over time.” Some phase changes would be short, as in the change from ice to a liquid state. Other phase changes would be longer, as in the change from liquid to gas. Each line of phase change would be different.

For example, if I applied this idea to teaching English grammar, some student phase changes would be short, as in a student’s use of pronouns to represent a noun. Other phase changes could be much longer, such as that same student employing noun-pronoun agreement. Time and energy would need to be expended to improve individual student performance on this task.

But whose energy is measured in this re-imagined transition? Perhaps the idea of phase changes could be used to explain how a teacher’s energy expended in instruction over time, or during a critical point, could improve academic performance. The same idea could be used to demonstrate how a student must expend additional energy at a critical point to improve understanding in order to advance to the next level.

At the end of the school year, teachers need to provide evidence of individual student growth, but perhaps a student is in a transitioning phase and growth is not yet evident. The major variable in measuring student achievement is the length of the critical point of transition from one level to another, and that critical point could extend for the length of a school year or even longer. Growth may not be measurable in the time provided, and more energy may need to be expended.

What was so interesting to me was how Steven’s use of phase changes had given me another lens through which to view the students I assess and the teachers I evaluate. Because measuring academic progress is not fixed by the same physical laws under which 540 calories are needed to turn 1 gram of water (at 100 degrees Celsius) to steam, each student’s graph of academic achievement (phase changes) varies. Critical points will occur at different levels of achievement, measured by different lengths of energy expended. Despite the wishes of teachers, administrators, and students themselves, “growth” is rarely on that 45º trajectory. Instead, growth is represented by moving up a series of stages or critical points that illustrate the amount of energy, by student and/or teacher, spent over time.
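Steven’s plateaus can be checked with standard textbook values for water: melting ice absorbs about 80 calories per gram (the latent heat of fusion), while boiling water absorbs the 540 calories per gram mentioned above (the latent heat of vaporization). A minimal sketch of that heating curve arithmetic:

```python
# Sketch: energy to carry 1 gram of ice at 0 °C to steam at 100 °C,
# using standard textbook values for water. The numbers show why the
# vaporization plateau on Steven's graph is so much longer than the
# melting plateau.

MASS_G = 1.0
HEAT_OF_FUSION = 80.0         # cal/g, melting ice at 0 °C
SPECIFIC_HEAT = 1.0           # cal/(g*°C), warming liquid water
HEAT_OF_VAPORIZATION = 540.0  # cal/g, boiling water at 100 °C

melt = MASS_G * HEAT_OF_FUSION                 # 80 cal
warm = MASS_G * SPECIFIC_HEAT * (100.0 - 0.0)  # 100 cal
boil = MASS_G * HEAT_OF_VAPORIZATION           # 540 cal

print(f"Melting: {melt:.0f} cal, warming: {warm:.0f} cal, boiling: {boil:.0f} cal")
print(f"Boiling takes {boil / melt:.2f}x the energy of melting")
```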

Energy matters, in physics and in student achievement. Steven’s TED-like talk gave me a new way to think about that. He was profound. I think he gets an A.