Archives For Assessment

Opening speeches generally start with a “Welcome.”
Lucy Calkins started the 86th Saturday Reunion, March 22, 2014, at Teachers College with a conjunction.

“And this is the important thing,” she told the crowd filling the rows of the Riverside Church, “the number of people who are attending has grown exponentially. This day is only possible with the goodwill of all.”

Grabbing the podium with both hands, and without waiting for the noise to die down, Calkins launched the day as if she were completing a thought from the last Saturday Reunion.

“We simply do not have the capacity to sign you up for workshops and check you in. We all have to be part of the solution.”

She was referring to the workshops offered free of charge to educators by all of the Teachers College Reading and Writing Project (TCRWP) staff developers at Columbia University. This particular Saturday, there were over 125 workshops advertised on topics such as “argument writing, embedding historical fiction in nonfiction text sets, opinion writing for very young writers, managing workshop instruction, aligning instruction to the CCSS, using performance assessments and curriculum maps to ratchet up the level of teaching, state-of-the-art test prep, phonics, and guided reading.”

“First of all,” she chided, “we cannot risk someone getting hit by a car.” Calkins’ concerns are an indication that the Saturday Reunion workshop program is a victim of its own success. The thousands of teachers disembarking from buses, cars, and taxis were directed by TCRWP minions to walk on sidewalks, wait at crosswalks, and “follow the balloons” to the Horace Mann building or Zankel Hall.

“Cross carefully,” she scolded in her teacher voice, “and be careful going into the sessions,” she continued. “The entrances to the larger workshops are the center doors; the exits are to the sides. We can’t have 800 people going in and out the same way.”

Safety talk over, Calkins turned her considerable energy to introducing a new collaborative venture: a website where educators can record their firsthand experiences with the Common Core State Standards and with Smarter Balanced Assessment Consortium (SBAC) or Partnership for Assessment of Readiness for College and Careers (PARCC) testing.

And, as unbelievable as this sounds, Calkins admitted that, sometimes, “I get afraid to talk out.” That is why, she explained, she has joined an all-star cast of educators (including Diane Ravitch, Kylene Beers, Grant Wiggins, Robert Marzano, Anthony Cody, Kathy Collins, Jay McTighe, David Pearson, Harvey “Smokey” Daniels, and others; see below) in organizing a website where educators with firsthand experience of standardized testing can document their experiences. The site is called Testing Talk (http://testingtalk.org/). The site’s message on the home page states:

This site provides a space for you to share your observations of the new breed of standardized tests. What works? What doesn’t? Whether your district is piloting PARCC, Smarter Balanced, or its own test, we want to pass the microphone to you, the people closest to the students being tested. The world needs to hear your stories, insights, and suggestions. Our goal is collective accountability and responsiveness through a national, online conversation.

Calkins’ promotion was directed to educators: “This will be a site for you to record your experience with testing, not to rant.” She noted that as schools “are spending billions, all feedback on testing should be open and transparent.”

Winding down, Calkins looked up from her notes. “You will all be engaged,” she promised. “Enter comments; sign your name,” she urged before closing with a final admonishment: “Be brave.”


Our school has been preparing for accreditation by the New England Association of Schools & Colleges, Inc. (NEASC), and that means two things:

Housecleaning and housekeeping.

The housecleaning is the easy part. A great deal of time and effort has been spent on making the school look nice for the accreditation team. Considering that our campus is in the bucolic Litchfield Hills of Connecticut, we had a great start. Our building is extremely well maintained, and our maintenance staff has been recognized for its “green” maintenance policies. The final housecleaning details were the addition of student art on the walls and a large canvas featuring the student-designed logo built around the motto “Quality, Academics, Pride.”

Preparing the housekeeping was different. Housekeeping required that all stakeholders in our school community reflect on how well we keep our “house,” our school, running. There have been meetings for the past two years: meetings with community members, meetings with students, meetings with teachers across disciplines. There have been committees to research seven topics:

  • Core Values, Beliefs, and Learning Expectations
  • Curriculum
  • Instruction
  • Assessment of and for Student Learning
  • School Culture and Leadership
  • School Resources for Learning
  • Community Resources for Learning

After all the meetings came the writing of the reports, and after all the reports came the gathering of the evidence. Finally, the evidence sits in eight bins in a room in the agricultural wing of the school ready for the volunteer accreditation team to review.

What is most striking about the collected evidence is the variety. The evidence today contrasts with the evidence from the accreditation several years ago. For each lesson plan on paper, there is a digital lesson plan. For each student essay drafted, peer-reviewed, and handwritten on composition paper, there is a Google Doc with peer comments; to see each draft, one need only check “see revision history.” Whether members of the NEASC committee check the revision histories of individual documents is not as important as how they will check the history we have provided in the evidence bins and websites. In looking at the evidence, the NEASC committee will note our academic housekeeping, and they will make recommendations as to how we should proceed in the future.

The entire school community has every right to be proud of Wamogo Regional High School, and recommendations from NEASC will help guide us in the future. But for tonight, the housecleaning and housekeeping are over.

A message from the Vice Principal arrived by e-mail tonight; she sums up the experience:

When driving home from school this evening, I was thinking about the arduous process we have all been engaged in over the past two years.  I don’t believe there is a single member of our school community that hasn’t played a part in this important preparation.  Many of you worked tirelessly on committees, writing reports, culling evidence, hanging student work, etc., etc., etc.  I just wanted to take a moment and  thank the entire Wamogo community for the rally we have all engaged in to prepare for this important visit.  I know that the visiting school will easily see what a special place Wamogo is and the obvious talents of our staff and students.  I am extremely proud of our school and want you to enjoy showing the visiting committee what wonderful work you are doing with our students.

Welcome, NEASC. Our house is ready.

March Madness is not exclusive to basketball.
March Madness signals the start of the standardized testing season here in Connecticut.
March Madness signals the tip-off for testing in 23 other states as well.

All CT school districts were offered the opportunity to choose either the soon-to-be-phased-out pen-and-paper Connecticut Mastery Tests (CMT, grades 3-8) and Connecticut Academic Performance Test (CAPT, grade 10) or the new set of computer-adaptive Smarter Balanced tests developed by the federally funded Smarter Balanced Assessment Consortium (SBAC). Regardless of choice, testing would begin in March 2014.

As an incentive, the SBAC offered the 2014 field test as “practice only,” a means to develop and calibrate future tests to be given in 2015, when the results will be recorded and shared with students and educators. Districts weighed their choices based on technology requirements, and many chose the SBAC field test. But for high school juniors who completed the pen-and-paper CAPT in 2013, this is practice only; they will receive no feedback. The 2014 SBAC field test will not count.

Unfortunately, the same cannot be said for the 8.5 hours of testing in English/language arts and mathematics that had to be taken from 2014 academic classes. The elimination of 510 minutes of instructional time is complicated by the need to schedule students into computer labs with hardware that meets testing specifications. For example, rotating students alphabetically through these labs means that academic classes scheduled during the testing windows may see students A-L one day and students M-Z on another. Additional complications arise for mixed-grade classrooms or schools with block schedules. Teachers must be prepared with partial or repeated lessons during the two-week testing period; some teachers may miss seeing students for extended periods of time. Scheduling madness.

For years, the state standardized test was given to grade 10 sophomores. In Connecticut, the results were never timely enough to deliver instruction addressing areas of weakness during 10th grade, but they did help identify general areas of weakness in the mathematics, English/language arts, and science curricula. Students who had not passed the CAPT had two more years to pass this graduation requirement; two more years of education were available to address specific student weaknesses.

In contrast, the SBAC is designed to be given to 11th graders, the junior class. Never mind that these juniors are preparing to sit for the SAT or ACT, the national standardized tests. Never mind that many of these same juniors have opted to take Advanced Placement courses with testing dates scheduled for the first two full weeks of May. On Twitter, AP teachers from New England to the mid-Atlantic are already complaining about the number of delays and school days lost to winter weather (for us, 5) and about the scheduled week of spring break (for us, the third week of April) that comes right before these AP college-credit exams. There is content to be covered, and teachers are voicing concerns about losing classroom seat time. Madness.

Eliminating the instructional time teachers use to prepare students for the standardized tests colleges actually require (SAT, ACT), all in the name of making students college and career ready, is puzzling; taking instructional time so students can sit for state-mandated standardized tests that claim to measure preparedness for college and career is an exercise in circular logic. Juniors are experiencing an educational Catch-22: they are practicing for a test they will never take, a field test that does not count. More madness.

In addition, juniors who failed the CT CAPT in grade 10 will still practice with the field test in 2014. Their CAPT graduation requirement, however, cannot be met with this test, and they must still take an alternative assessment to meet district standards. Furthermore, from 2015 on, students who do not pass the SBAC will not have two years to meet the state graduation requirement; their window to meet the graduation standard is limited to their senior year. Even more madness.

Now, on the eve of the inaugural testing season, a tweet from SBAC itself (3/14):

[Screenshot: the SBAC tweet]

This tweet was followed by word from CT Department of Education Commissioner Stefan Pryor’s office, sent out to superintendents by Dianna Roberge-Wentzell, that the state test will be delayed a week:

Schools that anticipated administering the Field Test during the first week of testing window 1 (March 18 – March 24) will need to adjust their schedule. It is possible that these schools might be able to reschedule the testing days to fall within the remainder of the first testing window or extend testing into the first week of window 2 (April 7 – April 11).

Education Week blogger Stephen Sawchuk provides more details on the reason for the delay in his post Smarter Balanced Group Delays:

The delay isn’t about the test’s content, officials said: It’s about ensuring that all the important elements, including the software and accessibility features (such as read-aloud assistance for certain students with disabilities) are working together seamlessly.

“There’s a huge amount of quality checking you want to do to make sure that things go well, and that when students sit down, the test is ready for them, and if they have any special supports, that they’re loaded in and ready to go,” Jacqueline King, a spokeswoman for Smarter Balanced, said in a March 14 interview. “We’re well on our way through that, but we decided yesterday that we needed a few more days to make sure we had absolutely done all that we could before students start to take the field tests.”

A few more days is probably what the teachers who carefully planned alternative lessons for the first week of the field test now want in order to revise those plans. The notice in the CT memo that districts “might be able to reschedule” is not helpful for a smooth delivery of curriculum, especially since school schedules are not developed with empty time slots available to accommodate “willy-nilly” testing windows. There are field trips, author visits, and assemblies scattered throughout the year, sometimes organized years in advance. Cancellation of activities can be at best disappointing, at worst costly. Increasing madness.

Added to all this madness is a growing “opt-out” movement for the field test. District administrators are trying to address parents’ concerns on one front and, on another, the growing concerns of educators wrestling with an increasingly fluid schedule. According to Sarah Darer Littman, writing on Connecticut News Junkie, the Bethel school district offered the following in a letter that parents of Bethel High School students received in February:

“Unless we are able to field test students, we will not know what assessment items and performance tasks work well and what must be changed in the future development of the test . . . Therefore, every child’s participation is critical.

For actively participating in both portions of the field test (mathematics/English language arts), students will receive 10 hours of community service and they will be eligible for exemption from their final exam in English and/or Math if they receive a B average (83) or higher in that class during Semester Two.”

Field testing as community service? Madness. Littman goes on to point out that research shows a student’s GPA is a better indicator of college success than an SAT score, and she suggests that such an exemption raises questions about a district valuing standardized testing over student GPA, its own internal measurement. That statement may cause even more madness, of an entirely different sort.

Connecticut is not the only state impacted by the delay. SBAC states include California, Delaware, Hawaii, Idaho, Iowa, Maine, Michigan, Missouri, Montana, Nevada, New Hampshire, North Carolina, North Dakota, Oregon, Pennsylvania, South Carolina, South Dakota, the U.S. Virgin Islands, Vermont, Washington, West Virginia, Wisconsin, and Wyoming.

In the past, Connecticut has been called “The Land of Steady Habits,” “The Constitution State,” and “The Nutmeg State.” With the SBAC, we could claim that we are now “A State of Madness,” except that 23 other states might want the same moniker. Maybe we should compete for the title? A kind of education bracketology, just in time for March Madness.

Open House: OMG!

September 15, 2013

September is Open House Month, and the welcoming speech from a teacher could sound like this:

“Welcome, Parents! Let me show you how to access my website on the SMARTboard, where you can see how the CCSS are aligned with our curriculum. You can monitor your child’s AYP by accessing our SIS, PowerSchool. In addition, all of our assignments are on the class wiki, which you can access 24/7. As we are a BYOD school, your child will need a digital device with a 7″ screen to use in class.”

OMG!

How parents may feel during Open House listening to education acronyms

The result of such a speech is that parents may feel like students all over again. The same people who sat in those desks, perhaps only a few years ago, are now on the other side of the classroom experience, and the rapid changes brought by technology in education create the need for an education primer, a list of important terms to know. While attending the Open House, parents can observe that there are still bulletin boards showcasing student work. They can note how small the desks appear now, if there are desks at all. Perhaps the lunch lady is the same individual who doled out applesauce and tater tots onto their school lunch trays. Yet listening to how instruction is delivered, monitored, and assessed may make parents feel they are in some alien experience, with instructors and administrators spouting a foreign language. Just what is a wiki? they may wonder, and what does BYOD stand for?

So, let’s begin with some of the acronyms.  At Open House, educators may casually throw around some of the following terms to explain what they teach or how they measure what they teach:

  • PBL (Project-Based Learning): a hands-on approach to lessons;
  • SIS (Student Information System);
  • Bloom’s Taxonomy: a sequence of learning based on complexity of task and level of critical thinking, now being replaced by the DOK;
  • DOK (Depth of Knowledge): the complexity of task and level of critical thinking required;
  • ESL (English as a Second Language);
  • AYP (Adequate Yearly Progress);
  • WIKI: a web application that allows people to add, modify, or delete content in collaboration with others; and
  • SMARTboard: an interactive whiteboard.

Subject-area names may also seem unfamiliar since they now reflect a different focus in education. English is now ELA (English/Language Arts), while science and math have merged like the Transformers into the mighty STEM (Science, Technology, Engineering, and Math). The old PE class may now bear the moniker Physical Activity and Health (PAH), but History has already dealt with the shift to the more inclusive term Social Studies, coined in the 1970s.

Assessment (testing) brings another page in the list of education acronyms that parents may hear at Open House, including these few examples:

  • DRP (Degrees of Reading Power): reading engagement, oral reading fluency, and comprehension for younger elementary students;
  • DRA (Developmental Reading Assessment): reading engagement, oral reading fluency, and comprehension in elementary and middle grade students;
  • STAR: skills-based test items with in-depth reports for screening, instructional planning, and progress monitoring;
  • PSAT/SAT/ACT: designed to assess student academic readiness for college.

Parents, however, should be aware that they are not alone in their confusion. Educators often deal with acronym duplication, and the abbreviations may change state by state. In Connecticut, some students have IEPs (Individual Education Plans), but all students have SSPs (Student Success Profiles), which share an acronym with the SSP (Strategic School Profile). Connecticut introduced the teacher evaluation program SEED (the System for Educator Evaluation and Development), an acronym not to be confused with SEED, a partnership with urban communities to provide educational opportunities that prepare underserved students for success in college and career.

Federal programs only add to the list of abbreviations. Since 1975, students have been taught under IDEA (the Individuals with Disabilities Education Act). NCLB (No Child Left Behind) has been the dominating force in education for the length of the Class of 2014’s time in school, along with its partner, the SSA (Student Success Act), which is similar to, but not exactly like, the SSP mentioned earlier. The latest initiative to enter the list of reform movements that parents should know is the CCSS, the Common Core State Standards.

The CCSS are academic standards developed in 2009 and adopted by 45 states in order to provide “a consistent, clear understanding of what students are expected to learn, so teachers and parents know what they need to do to help them.” Many of the concepts in the CCSS will be familiar to parents; however, the grade level at which they are introduced may be a surprise. Just as their parents may have been surprised to find the periodic table in their 5th grade science textbooks, there are many concepts in math (algebra) and English (schema) being introduced as early as kindergarten.

So when a student leaves in the morning with a digital device for school, BYOD or BYOT (Bring Your Own Technology), and sends a “text” saying they will be staying late for extra help or extracurricular activities, parents should embrace the enhanced communication that this Brave New World of technology in education offers. If at Open House a parent needs a quick explanation of a term a teacher is using, he or she should raise a hand; in spite of all these newfangled terms and devices, that action still signals a question.

Above all, parents should get to know the most important people in the building: the school secretary (sorry, the Office Coordinator) and the school custodian (sorry, FMP: Facility Maintenance Personnel). They know where your child left her backpack.

I recently had to write a position statement on assessment and evaluation. The timing of this assignment, June 2013, coincided with the release of the National Assessment of Educational Progress (NAEP) Progress Report for 2012. This “Nation’s Report Card” provides an overview of the progress made by specific age groups in public and private schools in reading and in mathematics since the early 1970s.

Since NAEP uses the results of standardized tests, and those standardized tests use multiple choice questions, here is my multiple choice question for consideration:

Based on the 2012 NAEP Report results, what difference(s) in reading scores separates a 17-year-old high school student in 1971 from a 17-year-old high school student in 2012?

a. 41 years
b. billions in dollars spent in training, teaching, and testing
c. a 2% overall difference in growth in reading
d. all of the above

You could act on your most skeptical instincts about the costs and ineffectiveness of standardized testing and make a calculated guess from the title of this blog post, or you could skim the 57-page report (replete with charts, graphs, graphics, etc.), which does not take long to read, and get the information needed to answer correctly: choice “D.”

Yes, 41 years later, a 17-year-old scores only 2% higher than a previous generation that probably included his or her parents.

There have been billions of dollars invested in developing the reading skills of our nation’s children. In just the last twelve years, there has been the federal effort in the form of Reading First, the literacy component of President Bush’s 2001 “No Child Left Behind” Act. Reading First initially offered over $6 billion to fund scientifically based reading-improvement efforts in five key early reading skills: phonemic awareness, phonics, fluency, vocabulary, and comprehension. The funding of grants for students enrolled in kindergarten through grade three in Title I schools began in 2002-2003.

There have been individual state initiatives, funded by state legislatures, that complement Reading First.

There have been efforts to improve literacy made by non-profit educational corporations and foundations, such as the Children’s Literacy Initiative, the National Reading Panel, and the American Library Association’s Born to Read initiative. In addition, a host of policy statements from the National Council of Teachers of English and programs offered by the National Writing Project have helped drive attention toward the importance of reading.

All of these initiatives drove publishers of educational materials to create programs, materials, and resources for educators to use. Unfortunately, the question of which reading program would prove most effective (Direct Instruction, Reading Recovery, Success for All, and others) became a tangled controversy amid charges of conflicts of interest: consultants hired by the Department of Education (DOE) to train teachers and state department of education personnel had also authored reading programs for curriculum. Fuel was added to this controversy when a 2006 review by the DOE’s Inspector General suggested that DOE personnel had frequently tried to dictate which curriculum schools must use with Reading First grant money.

Improving our students’ reading scores has been such a focus that our education systems have been awash in funding, materials, initiatives, and controversies since 2001, all part of a collective effort to improve reading for students…and the result?

The result is a measly 2% growth in reading for those leaving our school systems.

The evidence for this statement has been tracked by NAEP, an organization that has been assessing the progress of 9-, 13-, and 17-year-olds in reading. The graphs below, taken from the NAEP report, measure growth at each age at three reading levels: high (250), mid (200), and low (150). Other levels are measured for the highest- and lowest-achieving students, but the levels shown on the graphs correlate to the following descriptions:

LEVEL 250: Interrelate Ideas and Make Generalizations
Readers at this level use intermediate skills and strategies to search for, locate, and organize the information they find in relatively lengthy passages and can recognize paraphrases of what they have read. They can also make inferences and reach generalizations about main ideas and the author’s purpose from passages dealing with literature, science, and social studies. Performance at this level suggests the ability to search for specific information, interrelate ideas, and make generalizations.

LEVEL 200: Demonstrate Partially Developed Skills and Understanding
Readers at this level can locate and identify facts from simple informational paragraphs, stories, and news articles. In addition, they can combine ideas and make inferences based on short, uncomplicated passages. Performance at this level suggests the ability to understand specific or sequentially related information.

LEVEL 150: Carry Out Simple, Discrete Reading Tasks
Readers at this level can follow brief written directions. They can also select words, phrases, or sentences to describe a simple picture and can interpret simple written clues to identify a common object. Performance at this level suggests the ability to carry out simple, discrete reading tasks.

[NAEP graph: reading performance levels for 9-year-olds, 1971-2012]

The NAEP report does offer some positive developments. For example, from 1971 to 2012, reading scores for 9-year-olds saw an increase of 5% in students reading at the lower (150) level, an increase of 15% for students reading at mid-range (200), and an increase of 6% for students reading at the higher (250) level.

[NAEP graph: reading performance levels for 13-year-olds, 1971-2012]

Similarly, reading scores for 13-year-olds have increased 8% for students reading at mid-level and 5% for students at the higher level. Scores for students reading at the lower level, however, saw a negligible increase of only 1%.

At this point, I should note that the NAEP report does contain some positive findings. For example, the measurements indicate that the reading gaps between racial/ethnic groups did narrow over the past 41 years. According to the report:

Even though White students continued to score 21 or more points higher on average than Black and Hispanic students in 2012, the White – Black and White – Hispanic gaps narrowed in comparison to the gaps in the 1970s at all three ages. The White – Black score gaps for 9- and 17-year-olds in 2012 were nearly half the size of the gaps in 1971.

Unfortunately, even that positive information should be considered with the understanding that most of these gains for racial and ethnic groups were accomplished before 2004.

Finally, for students leaving public and private school systems, the overall news is depressing. Any gains in reading at ages 9 and 13 were flattened by age 17. The percentage of students reading at the higher level dropped from 7% to 6%, while the percentage of mid-range readers remained the same at 39%. The gain of 3 percentage points came from lower-range readers, who moved from 79% to 82%. Considering the 1-point loss at the higher end, the overall growth in measurement is that measly 2%.
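Following the post’s own tally of those level-by-level shifts (using the report’s rounded percentages), the net movement works out to:

$$(82 - 79) + (39 - 39) + (6 - 7) = 3 + 0 - 1 = 2 \text{ percentage points}$$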

[NAEP graph: reading performance levels for 17-year-olds, 1971-2012]

That’s it. A financial comparison would be a yield of $.02 for every dollar we have invested. Another comparison: for every 100 students, only two have demonstrated improvement after 13 years of education.

Assessing the last 12 of the 41 years of measuring reading initiatives illustrates that there has been no real progress in reading, as measured by standardized tests, in our public and private education institutions, grades K-12. NAEP’s recounting of the results, after considerable funding, legislation, and effort, is, as Shakespeare said, “a tale…full of sound and fury, signifying nothing.”


The paradox of summer reading: Read = pleasure or Read = work.

All students should read at least one book this summer and practice the independent reading skills they have used the whole school year. They should receive credit for reading over the summer, but to give credit means an assessment, and an assessment comes dangerously close to committing Readicide (n): the systematic killing of the love of reading, often exacerbated by the inane, mind-numbing practices found in schools.

Anecdotally, 50% of students will read for fun. The other 50% will skim or SparkNote their way through an assignment, or they will not read at all, for a variety of reasons: “it’s boring,” “too much work,” “I hate to read.” Many students avoid books, creating a “reading-free zone” from June through August. In addition, there are some parents who openly complain that assignments over the summer interfere with family vacation plans.

But there are many parents who understand the importance of reading. They could be frustrated all summer as they responsibly hound their children to do their summer assignments rather than wait until the last minute.

Summer reading is fun for some, but summer reading is a hassle for others. Why bother, indeed?

Well, research clearly demonstrates that summer reading is important in maintaining reading skills at every grade level. A 1996 meta-analysis of 39 separate studies about the effects of summer on student learning concluded that summer reading was critical to stopping the “summer slide.” Without summer reading, there could be a loss equaling about one month on each grade-level equivalent scale; students would be playing cognitive “catch-up” through November each school year.

In “The Effects of Summer Vacation on Achievement Test Scores: A Narrative and Meta-Analytic Review” by H. Cooper, B. Nye, K. Charlton, J. Lindsay and S. Greathouse, there were several key findings:

  • At best, students showed little or no academic growth over the summer. At worst, students lost one to three months of learning.
  • Summer learning loss was somewhat greater in math than reading.
  • Summer learning loss was greatest in math computation and spelling.
  • For disadvantaged students, reading scores were disproportionately affected, and the achievement gap between rich and poor widened.

Studies since 1996 have confirmed the findings of the meta-analysis, so summer reading cannot be optional if students are to maintain their skills and progress as readers. The problem for teachers is how to engage the 50% who will not read over the summer. My English Department has tried the following:

  • One summer, we tried the assigned-book route. We used a multiple-choice quiz to measure student comprehension. The results were average to below average, and most students hated having to read an assigned book.
  • One summer, we tried the dialectical journal, kept by each student on either an independent book choice or an assigned book (see post). The results were mixed, with 25% of students not completing the journal or completing it so poorly that we were chasing students for work past the due date and well into the end of the first quarter.
  • One summer, we tried the “project of your choice” in response to a “book of your choice,” but then we were buried in a pile of projects of widely variable quality.

So, this summer (2013) we are again trying something different in the hope of finding a better measurement for summer reading. We are giving students a choice of fiction or nonfiction. Incoming 7th and 8th graders choose a book for the summer, and the school will provide that book. Students entering grades 9-11 may check out a book from an extensive list organized by our school media specialist, or choose any other book.

Summer reading will be assessed with a writing assignment when all students return in September. The questions will align with standardized-test essay questions (CAPT, SAT), and students may have the book in hand or notes from the book; students who read early in the summer will have the same advantages as students who read later in the summer, or the night before the writing prompt:

Essay question(s) for a work of FICTION read over the summer:
How does the main character change from the beginning of the story to the end? What do you think causes the change?
How did the plot develop and why?
How did the main character change? What words or actions showed this change?

Essay question(s) for a work of NON-FICTION read over the summer:
If this book was intended to teach the reader something, did it succeed? Was something learned from reading this book? If so, what? If not, why did the book fail as a teaching tool? Was there a specific passage that left an impression, good or bad? Share the passage and its effect on the reader.

This assessment will be given the second week in September, and while there is a concern that writing is not as direct a measure of a student’s reading comprehension, at minimum this assessment will give the English Department members a chance to teach the writing-prompt response.

Students who are in honors-level or Advanced Placement courses will still have required reading. For example, incoming 9th grade honors students will read The Alchemist and The Book Thief, while 12th grade Advanced Placement English Literature students will be given the choice to read three of the following five titles: Bel Canto, The Story of Edgar Sawtelle, The Poisonwood Bible, Little Bee, or A Thousand Splendid Suns.

Before students leave for the summer, we plan on putting books into as many hands as possible. We will encourage students to organize themselves with book buddies, a suggestion from a post by Christopher Lehman: each student chooses someone to read alongside, someone to talk with about the reading. The students have Shelfari accounts and can communicate online during the summer. We will promote our own reading book sites and include the audiobook site SYNC, which pairs a young adult novel with a classic each week during the summer. For example, August 1-7, 2013, will feature Death Cloud by Andrew Lane, read by Dan Weyman (Macmillan Audio), paired with The Adventures of Sherlock Holmes by Arthur Conan Doyle, read by Ralph Cosham (Blackstone Audio). We will post information about summer reading on our websites and send out Remind 101 notices.

While the research clearly demonstrates that summer reading is important, how students accomplish summer reading assignments during vacation time is a paradox. Should we assess reading for pleasure, or should students be left on their own and possibly lose reading skills?  Quiz them in September or lose them to the summer slide? No right answer, but good evidence to continue the tradition of summer reading.

If I had a choice of vanity license plates, I might consider one that marked my recent experience as a volunteer on an educational accreditation team.

[Image: a mock “NEASC” vanity license plate]

Educational accreditation is the “quality assurance process during which services and operations of schools are evaluated by an external body to determine if applicable standards are met.”

I served as a volunteer on a panel for the New England Association of Schools and Colleges (NEASC), an agency that provides accreditation services, pre-K through university, for more than 2,000 public and private institutions in the six-state region. NEASC panels are composed of experienced chairpersons and volunteer teachers, administrators, and support staff who visit schools according to a set schedule. According to its website:

In preparation for a NEASC evaluation, all member schools must undertake an exhaustive self-study involving the participation of faculty, administrators, staff, students, community members, and board members.

The key word here? Exhaustive.

Exhaustive in preparing for a NEASC visit. Exhaustive in hosting a NEASC visit. Exhaustive in being a member of the NEASC team that visits.

But first, a little background. In order to serve as a volunteer, I had to leave several lessons on Hamlet, my favorite unit, with my substitute. So, when I understood the level of professional discretion required for a NEASC visit, I felt a curious connection to the Ghost, Hamlet’s father, who likewise abides by an oath.  On the ramparts of Elsinore, he tells Hamlet:

But that I am forbid
To tell the secrets of my prison-house
I could a tale unfold whose lightest word
Would harrow up thy soul, freeze thy young blood (1.5.749-752)

I may not say what school I visited, nor may I discuss any part of the actual accreditation discussion among the members of my team. So this post will speak only as a self-reflection on the process and a few moments of recognition of how accreditation works.

List, list, O, list! (1.5.758)

Sunday morning at 9:30 AM, the team members were already hard at work organizing piles of documents prepared for our visit. We were organized into pairs, two members to work on each of the seven standards: 14 team members and two chairpeople.

There was a working lunch before the entire team went to the school for a prepared presentation. This presentation was the high school’s opportunity to quickly familiarize us with its culture and to present the strengths and needs it had determined in the (exhaustive) self-study.

Madam, how like you this play?(3.2.222)

Once we returned to our hotel, the lodgings provided by our hosting school, the work began in earnest. We looked through bins of student work to see if it met the standards set by NEASC. We looked at all forms of assessments, lesson plans, and student responses. We recorded our findings well into the night and finally left the work room at 10 PM.

…to sleep;/To sleep: perchance to dream (3.1.65-66)

On both Monday and Tuesday, the team was up early to return to the school (7:00 AM), and the team split up, individually or in groups, to spend the school day conducting interviews with faculty, staff, and students. Facility tours, lunches shared with students in the cafeteria, and opportunities to “pop into” classes were available. There simply was no “unobligated time” as we worked steadily in the work room at the school, where we would record our findings before returning to the school hallways.

Were you not sent for? Is it
your own inclining? Is it a free visitation? Come,
deal justly (2.2.275-276)

Both the Monday and Tuesday evening sessions were long as team members furiously documented their findings in a report that would still need editing and revision. We had worked from 6 AM to 10:30 PM, with time allotted for meals and an hour’s respite to call home or check my own school’s e-mail. Closing my eyes, I thought how much,

My spirits grow dull, and fain I would beguile
The tedious day with sleep. (3.2.226-227)

An early Wednesday morning work session let us polish the report and present our final conclusions to the other members of the team. Finally, the votes on whether the team would recommend the school for accreditation were tallied, and we marched into the school library to meet the faculty and staff a final time. We were leaving a report for them to:

suit the action to the word, the word
to the action; (3.2.17-18)

The chair gave a short speech indicating the tone but not the contents of our report, and then, according to protocol, we left as a team, not speaking to anyone from the school, nor to each other. Staying silent, I thought

Farewell, and let your haste commend your duty. (1.2.39)

The experience provided me with insights into the strengths and weaknesses of my own school’s educational program, and I am eager to share ways to improve instruction with my fellow faculty members. Our school is scheduled for a visit from a NEASC accreditation team in the spring of 2014.

As professional development, the experience was positive but physically demanding and intellectually challenging. The chairs’ use of technology (Google Docs, LiveBinders, Linoit) allowed for efficient sharing of information on the seven standards: Core Values and Beliefs, Curriculum, Instruction, Assessment, School Culture and Leadership, School Resources, and Community Resources. Awash in papers and digital materials for 16 hours a day, I wondered how any previous teams using only hard copies had collaborated successfully.

Additionally, as I looked at the various standards of instruction, I found myself wondering about the consequences of implementing the Common Core State Standards (CCSS) and the growing reliance on standardized testing in evaluating teachers and assessing student understanding. Will the current form of regional accreditation adjust to measurements that will be implemented nationally? The United States is broken into six regional accreditation districts; if students meet national standards, however, how will these regional accreditation panels be used?

Finally, our four-day “snapshot,” coupled with the school’s own exhaustive self-study, could not address all of the arbitrary elements out of a school’s control, but the process is far more informative and meaningful than any standardized test results tied to the CCSS could offer. Consider also that the financing of a school seriously impacts, for good or for ill, every standard for measuring a school’s success. The intangible “culture” surrounding a school and the fluid landscape of 21st century technology are other arbitrary factors that impact all standards. We even encountered a “snow-delayed” opening, as if to remind us that a capricious Mother Nature refuses to allow for standardized measurement!

I only hope that my experience in informing another school in order to improve its educational program will prove beneficial. I know that when the team comes in the spring of 2014, they will do as I have tried to do:

 report me and my cause aright…(5.2.339)

The rest I now need requires silence.

This post completes a trilogy of reflections on the Connecticut Academic Performance Test (CAPT), which will be terminated once the new Smarter Balanced Assessments tied to the Common Core State Standards (CCSS) are implemented. There will be at least one more year of the same CAPT assessments, specifically the Interdisciplinary Writing prompt (IW), where 10th grade students write a persuasive essay in response to news articles. While the horribly misnamed Response to Literature (RTL) prompt confuses students as to how to truthfully evaluate a story and drives students into “making stories up” in order to respond to a question, the IW shallowly addresses persuasive writing with prompts that have little academic value.

According to the CAPT Handbook (3rd Generation) on the CT State Department of Education’s website, the IW uses authentic nonfiction texts that have been:

“… published and are informational and persuasive, 700-1,000 words each in length, and at a 10th-grade reading level.  The texts represent varied content areas (e.g., newspaper, magazine, and online articles, journals, speeches, reports, summaries, interviews, memos, letters, reviews, government documents, workplace and consumer materials, and editorials).  The texts support both the pro and con side of the introduced issue.  Every effort is made to ensure the nonfiction texts are contemporary, multicultural, engaging, appropriate for statewide implementation, and void of any stereotyping or bias.  Each text may include corresponding maps, charts, graphs, and tables.”

Rather than being taught in English, interdisciplinary writing is taught in social studies, because the subject of social studies is already interdisciplinary. The big tent of social studies includes elements of economics, biography, law, statistics, theology, philosophy, geography, sociology, psychology, anthropology, political science, and, of course, history. Generally, 9th and 10th grade students study the ancient world through the modern European world (through WWII) in social studies. Some schools may offer civics in grade 10.

Social studies teachers always struggle to capture the breadth of history, usually Western civilization, in two years. However, for the 15 months before the CAPT, social studies teachers must also prepare students to write for the IW test. But does the IW reflect any of the content-rich material in social studies class? No, it does not. Instead, the IW prompt is developed around some “student-centered” contemporary issue. For example, past prompts have included:

  • Should students be able to purchase chocolate milk in school?
  • Should utility companies construct wind farms in locations where windmills may impact scenery or wildlife?
  • Should ATVs be allowed in Yellowstone Park?
  • Should the school day start later?
  • Should an athlete who commits a crime be allowed to participate on a sports team?
  • Should there be random drug testing of high school students?

On the English section of the test, there are responses dealing with theme, character, and plot. On the science section, the life, physical, and earth sciences are woven together in a scientific inquiry. On the math section, numeracy is tested through problem-solving. In contrast to these disciplines, the social studies section, the IW, has little or nothing to do with the subject content. Students need only write persuasively on ANY topic:

For each test, a student must respond to one task, composed of a contemporary issue with two sources representing pro/con perspectives on the issue.  The task requires a student to take a position on the issue, either pro or con.  A student must support his or her position with information from both sources.  A student, for example, may be asked to draft a letter to his or her congressperson, prepare an editorial for a newspaper, or attempt to persuade a particular audience to adopt a particular position.  The task assesses a student’s ability to respond to five assessed dimensions in relationship to the nonfiction text: (1) take a clear position on the issue, (2) support the position with accurate and relevant information from the source materials, (3) use information from all of the source materials, (4) organize ideas logically and effectively, and (5) express ideas in one’s own words with clarity and fluency.

The “authentic” portions of this test are the news articles, but the released materials illustrate that these news articles are never completely one-sided; if they are written well, they already include a counter-position. Therefore, students are regurgitating already highly filtered arguments. Second, the student responses never find their way into the hands of the legislators or newspaper editors, so the responses are not authentic in their delivery. Finally, because these prompts have little to do with social studies, valuable time that could be used to improve student content knowledge of history is being lost. Some teachers use historical content to practice writing skills, but there is always instructional time spent practicing with released exam materials.

Why are students asked to argue about the length of a school day when, if presented with enough information, they could argue a position that reflects what they are learning in social studies? If they are provided the same kinds of newspaper, magazine, and online articles, journals, speeches, reports, summaries, interviews, memos, letters, reviews, government documents, workplace and consumer materials, and editorials, could students write persuasive essays with social studies content that is measurable? Most certainly. Students could argue whether they would support a government like Athens or a government like Sparta. Students could be provided brief biographies and statements of belief for different philosophers and argue whom they would prefer as a teacher, Descartes or Hegel. Students could write persuasively about which amendment of the United States Constitution they believe needs to be revisited, Amendment 10 (states’ rights) or Amendment 27 (limiting changes to congressional pay).

How unfortunate that such forgettable issues as chocolate milk or ATVs are considered worthy of determining a student’s ability to write persuasively. How inauthentic to encourage students to write to a legislator or editor and then do nothing with the students’ opinions. How depressing to know that the time and opportunity to teach and to measure a student’s understanding of the rich content of social studies is lost every year with IW test preparation.

Maybe the writers of the CAPT IW prompt should have taken a lesson from the writers of Saturday Night Live and the “Coffee Talk” sketches with Mike Myers. In these sketches, Myers played Linda Richman, host of the call-in talk show “Coffee Talk.” When s(he) would become too emotional (or verklempt) to talk, s(he) would “give a topic” for the audience to talk about “amongst yourselves.” Holding back tears, furiously waving red nails in front of his face, Myers would gasp out one of the following:

“The Holy Roman Empire was neither holy, Roman, nor an empire….Discuss…”

“Franklin Delano Roosevelt’s New Deal was neither new nor a deal…. Discuss…”

“The radical reconstruction of the South was neither radical nor a reconstruction…. Discuss…”

“The internal combustion engine was neither internal nor a combustion engine…. Discuss…”

If a comedy show can come up with these academic topics for laughs, why can’t students answer them for real? At least they would understand what made the sketches funny, and that understanding would be authentic.

March in Connecticut brings two unpleasant realities: high winds and the state standardized tests. Specifically, the Connecticut Academic Performance Tests (CAPT) given to grade 10 are in the subjects of math, social studies, science, and English.

There are two tests in the English section of the CAPT to demonstrate student proficiency in reading. In one, students are given a published story of 2,000-3,000 words in length at a 10th-grade reading level. They have 70 minutes to read the story and draft four essay responses.

What is being tested is the student’s ability to comprehend, analyze, synthesize, and evaluate. While these goals are properly aligned to Bloom’s taxonomy, the entire enterprise smacks of intellectual dishonesty when “Response to Literature” is the title of this section of the test.

Literature is defined online as:

“imaginative or creative writing, especially of recognized artistic value; or writings in prose or verse, especially writings having excellence of form or expression and expressing ideas of permanent or universal interest.”

What the students read on the test is not literature. What they read is a story.

A story is defined as:

“an account of imaginary or real people and events told for entertainment.”

While the distinction may seem small at first, the students have a very difficult time responding to the last of the four questions asked in the test:

How successful was the author in creating a good piece of literature? Use examples from the story to explain your thinking.

The problem is that the students want to be honest.

When we practice writing responses to this question, we use the released test materials from previous years: “Amanda and the Wounded Birds,” “A Hundred Bucks of Happy,” “Machine Runner,” or “Playing for Berlinsky.” When the students write their responses, they are able to write that they understood the story and that they can make a connection. However, many students complain that the story they just read is not “good” literature.

I should be proud that the students recognize the difference. In grades 9 and 10, they are fed a steady diet of great literature: The Odyssey, Of Mice and Men, Romeo and Juliet, All Quiet on the Western Front, Animal Farm, Oliver Twist. The students develop an understanding of characterization. They are able to tease out complex themes and identify “author’s craft.” We read the short stories “The Interlopers” by Saki, “The Sniper” by Liam O’Flaherty, or “All Summer in a Day” by Ray Bradbury. We practice the CAPT good-literature question with these works of literature. The students generally score well.

But when the students are asked to do the same for a CAPT story like the 2011 story “The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum”, they are uncomfortable trying to find the same rich elements that make literature good. A few students will be brave enough to take on the question with statements such as:

  • “Because these characters are nothing like Lennie and George in Of Mice and Men…”
  • “I am unable to find one iota of author’s craft, but I did find a metaphor.”
  • “I am intelligent enough to know that this is not ‘literature’…”

I generally caution my students not to write against the prompt. All the released CAPT exemplars are replete with praise for each story offered, year after year. But I also recognize that calling the stories offered on the CAPT “literature” promotes intellectual dishonesty.

Perhaps the distinction between literature and story is not the biggest problem that students encounter when they take a CAPT Response to Literature. For at least one more year students will handwrite all responses under timed conditions: read a short story (30 minutes) and answer four questions (40 minutes). Digital platforms will be introduced in 2014, and that may help students who are becoming more proficient with keyboards than pencils.
But even digital platforms will not resolve the significant issue with one other question, the “connection” question (#3), on the CAPT Response to Literature:

 What does this story say about people in general? In what ways does it remind you of people you have known or experiences you have had?  You may also write about stories or books you have read or movies, works of art, or television programs you have seen.  Use examples from the story to explain your thinking.

Inevitably, a large percentage of students write about personal experiences when they make a connection to the text. They write about “friends who have had the same problem” or “a relative who is just like” or “neighbors who also had trouble.” When I read these in practice sessions, I sometimes comment to the student, “I am sorry to hear about ____.”

However, the most frequent reply I get is often startling.

“No, that’s okay. I just made that up for the test.”

At least they know that their story, “an account of imaginary or real people and events told for entertainment,” is not literature, either.


Standardized testing in Connecticut begins next month. The 10th grade students who are taking a reading comprehension practice test all look as if they are engaged. Their heads are bent down; they are marking their papers. I am trying to duplicate test-taking conditions to prepare them for these exams. I also want to compare the scores from this assessment with one taken earlier in the year to note their progress.

Next month, these students will sit in the same seats, for the same amount of time, perhaps using the same pen or pencil, but they are not the “same.” That is because they are adolescents. They are going through physical changes. They are going through emotional changes. They are going through a period of social adjustment. Outwardly, they may look calm, but the turbulence inside is palpable.

I imagine if I could tune into their inner monologues, the cacophony would be deafening:

  • “…missed the bus!!!! No time for breakfast this morning…”
  • “…this is the biggest zit I have ever had!…”
  • “…not ready for the math test tomorrow…”
  • “….did I make the team?…”
  • “…why didn’t I get that part in the play?…”
  • “…I forgot the science homework!..”
  • “…When this test is over, I’ve got to find out who he is taking to the dance!…”
  • “…what am I going to do when I grow up?..”
  • “…should I get a ride home or should I take the late bus?…”
  • “…Is she wearing the same shirt as me?…”

These students take the practice assessment like other classes of students before them. Unlike generations of students before them, however, these students contend with social media, which makes a significant contribution to their behavior. Their access to social media updates (Facebook posts, tweets, text messages) exacerbates the turmoil and creates a social, emotional, hormonal slurry that changes hourly.

And very soon, in one of those hours, these students will take a real state standardized test.

These factors may explain why the highs and lows of my data collection for several students bear a closer resemblance to an EKG than to a successful corporate stock report. I may not want to count the results of an assessment for a student because I know what may have gone wrong on that day. However, the anecdotal information I have for a given student on a given day is not recorded in the collection of numbers; the measurement of student performance is exclusively the number of items right vs. the number of items wrong.

Yet, there is still truth in the data. When the individual student results are combined as a class, Student A’s bad day is mitigated by Student B’s good day. The reverse may be true the following week. Averaging Student A’s results with those of all the other members of the class neutralizes many of the individual emotional or hormonal influences. Collectively, the effects of adolescence are dampened, and I can analyze a group score that measures understanding. Ultimately, data averaged class by class, or a student’s ups and downs averaged over time, is more reliable in providing general information about growth over time.
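To see why the averaging steadies the line, here is a minimal simulation sketch. The numbers are invented for illustration (a hypothetical class of 25 and a plus-or-minus 10-point good-day/bad-day swing around a fixed “true” skill), not taken from my gradebook:

```python
import random
import statistics

random.seed(42)  # reproducible run

CLASS_SIZE = 25      # hypothetical class size
TRUE_SKILL = 70.0    # hypothetical "true" understanding, as a percent score
DAILY_SWING = 10.0   # hypothetical good-day/bad-day noise, in points

def observed_score(true_skill: float) -> float:
    """One student's score on one day: true skill plus adolescent turbulence."""
    return true_skill + random.uniform(-DAILY_SWING, DAILY_SWING)

student_a = []      # one student's day-to-day scores (the EKG-like line)
class_average = []  # the steadier class-level line
for test_day in range(8):  # eight practice assessments
    scores = [observed_score(TRUE_SKILL) for _ in range(CLASS_SIZE)]
    student_a.append(scores[0])
    class_average.append(statistics.mean(scores))

print("Student A, day to day:", [round(s, 1) for s in student_a])
print("Class average, day to day:", [round(a, 1) for a in class_average])
print("Student A spread (stdev):", round(statistics.stdev(student_a), 2))
print("Class average spread (stdev):", round(statistics.stdev(class_average), 2))
```

Run it, and the class-average line wobbles far less than Student A’s line; averaging 25 students shrinks the random swing by roughly a factor of √25 = 5, which is the statistical reason a group score is a steadier measure of understanding than any single student’s good or bad day.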

Although I try to provide the ideal circumstances in order to optimize test scores, I can never exclude that social, emotional, hormonal slurry swirling in each of their heads. I know that the data collected on any given day might be unreliable in determining an individual student’s progress. I cannot predict the day or hour when a student should take a test to measure understanding.

How unfortunate that this is exactly what happens when students take a state standardized test on a predetermined date during an assigned hour, regardless of what turmoil might be going on in their lives. How unfortunate that the advocates of standardized testing are never in the classroom to hear the voices in the adolescent students’ internal monologues: “…I am so tired!…When will this be over?…Does this test really show what I know?”