Archives For CMT

It’s official.

The chocolate milk debate as a test writing prompt is dead in Connecticut for all grade levels.

Yes, that old stalwart, “Should there be chocolate milk in schools?” offered to students as a standardized writing prompt, was made null and void with one stroke of Governor Malloy’s pen. According to the Hartford Courant article “Malloy Veto Keeps Chocolate Milk On School Lunch Menus” (6/12/14),

“to the vast relief of school kids, nutritionists, milk producers and lawmakers, Gov. Dannel P. Malloy used his veto power Thursday to kill a bill that would have banned chocolate milk sales in Connecticut schools.” 

Apparently, the same nutritional charts, editorials, and endorsements from dairy groups organized in packets and given to students in grades 3-11 to teach how to incorporate evidence into a fake persuasive argument under testing conditions were convincing enough to move real CT residents to make a persuasive argument to legislators. To show his solidarity with the people, Governor Malloy quaffed a container of chocolate milk before vetoing the bill that would have banned the sale of chocolate milk in schools.

Typically, the writing prompt is addressed in English/Language Arts (ELA) class in elementary schools, but in middle and high schools, a persuasive essay is often the responsibility of the social studies teacher. The assumption here is that the skill of persuasion requires research and the incorporation of evidence, both taught in social studies classes. In contrast, ELA classes are dedicated to the analysis of literature through essays using a range of skills: identifying author’s craft, identifying author’s purpose, editing, and revising. The responsibilities for the writing portion of an exam are divided between the ELA classes for the literary analysis essay and the social studies classes for the persuasive essay. This design is intended to promote an interdisciplinary effort, but it is an intellectually dishonest division of labor.

ELA teachers can prepare students for standardized tests using ELA content (literature and grammar) to improve skills. Math and science teachers are likewise tied to their disciplines’ content in preparing their students. Social studies is the only core discipline with this test-prompt disconnect.

So, what topics might test creators design to replace the infamous chocolate milk debate prompt? Before test creators start manufacturing new and silly debates, there is a window of opportunity where attention could be brought to this disconnect between content and testing in writing. Here is the moment where social studies teachers should point out to test creators the topics from their curriculum that could be developed into writing prompts. Here is a foot in the door for the National Council for the Social Studies to introduce writing prompts that complement their content. For example, there could be prompts about Egyptian culture, prompts on the American Revolution, or prompts about trade routes and river-based communities. Too often, social studies teachers must devote class time to topics unrelated to their curriculum.

The Smarter Balanced Assessment Field Test given this past spring (2014) to 11th graders was about journalists’ use of social media. While students took the test, I overheard the following exchange:

“Of course they use social media,” grumbled one student, “who is going to stop them?”
“Do they think they are ‘cool’ because they mentioned Twitter?” countered another.

Previous standardized test writing prompts (in Connecticut, the CMT and CAPT) for high school and middle school have asked students to write persuasively on the age at which students should be able to drive; whether wolves should be allowed in Yellowstone National Park; whether to permit random drug testing of high school students; and whether uniforms should be required in schools.

Please notice that none of these prompts is directly related to the content of any social studies curriculum. Furthermore, the sources prepared for students to use as evidence in their responses are packets of newspaper opinion columns, polls, and statistical charts; no serious research is required.

Here is the moment when social studies teachers and curriculum leaders need to point out how academically dishonest the standardized test writing prompt is as a measure of instruction in their discipline. No longer should the content of social studies be abandoned for inauthentic debate.

The glass in Connecticut is half-full now that students can have chocolate milk in schools. Time for test creators to empty out the silly writing prompts that have maddened social studies teachers for years.

Time to choose content over chocolate.


As the Connecticut state standardized tests fade into the sunset, teachers are learning to say “Good-bye” to all those questions that ask the reader to make a personal connection to a story. The incoming English Language Arts Common Core Standards (ELA-CCSS) are eradicating the writing of responses that begin with, “This story reminds me of…” Those text-to-self, text-to-text, and text-to-world connections that students have made at each grade level are being jettisoned. The newly designed state assessment tests will tolerate no more fluff; evidence-based responses only, please.

Perhaps this hard-line attitude towards literacy is a necessary correction. Many literacy experts had promoted connections to increase a reader’s engagement with a text. For example,

 “Tell about the connections that you made while reading the book. Tell how it reminds you of yourself, of people you know, or of something that happened in your life. It might remind you of other books, especially the characters, the events, or the setting” (Guiding Readers and Writers Grades 3-6, Fountas and Pinnell) 

Unfortunately, the question became overused, asked for almost every book at each grade level. Of course, many students did not have similar personal experiences to make a connection with each and every text. (Note: Given some of the dark literature, vampires and zombies, that adolescents favor, not having personal experience may be a good sign!) Other students did not have enough reading experience or the sophistication to see how the themes in one text were similar to themes in another. Some of the state assessment exemplars revealed how students often made limited or literal connections, for example: “The story has a dog; I have a dog.”

The requirement to make a connection to each and every story eventually led to intellectual dishonesty. Students who were unable to call to mind an authentic connection faked a relationship or an experience. Some students claimed they were encouraged by their teachers to “pretend” they knew someone just like a character they read about. “Imagine a friend had the same problem,” they were told. Compounding this problem was the inclusion of this connection question on the state standardized tests, the CAPT (grade 10) and the CMT (grades 3-8). So, some students traded story for story in their responses, and they became amazingly creative in answering this question. I mentioned this in a previous post when a student told me that the sick relative he had written about in a response didn’t really exist. “Don’t worry,” he said brightly after I offered my condolences, “I made that up!”

Last week, our 9th grade students took a practice standardized test with the “make a connection question” as a prompt. They still need to practice since there is one more year of this prompt before ELA CCSS assessments are in place. The students wrote their responses to a story where the relationship between a mother and daughter is very strained. One of the students wrote about her deteriorating and very difficult relationship with her mother. I was surprised to read how this student had become so depressed and upset about her relationship with her mother. I was even more surprised that afternoon when that same mother called to discuss her daughter’s grade. I hesitated a little, but I decided to share what was written in the essay as a possible explanation. The next day, I received the following e-mail,

“I told M___that I read the practice test where she said I didn’t have time to talk and other things were more important. She just laughed and said that she had nothing in common with the girl in the story so she just made that up because she had to write something. We had a good laugh over that and I felt so relieved that she didn’t feel that way.”

After reading so many student “make a connection” essays, I should have seen that coming!

Good-bye, “Make a Connection” question. Ours was an inauthentic relationship; you were just faking it.

Is this the Age of Enlightenment? No.
Is this the Age of Reason? No.
Is this the Age of Discovery? No.

This is the Age of Measurement.

Specifically, this is the age of measurement in education where an unprecedented amount of a teacher’s time is being given over to the collection and review of data. Student achievement is being measured with multiple tools in the pursuit of improving student outcomes.

I am becoming particularly attuned to the many ways student achievement is measured as our high school is scheduled for an accreditation visit by the New England Association of Schools and Colleges (NEASC) in the spring of 2014. I am serving as co-chair with the very capable library media specialist, and we are preparing school-wide rubrics.

Several of our school-wide rubrics currently in use have been designed to complement scoring systems associated with our state tests, the Connecticut Mastery Tests (CMT) and Connecticut Academic Performance Tests (CAPT). While we have modified the criteria and revised the language in the descriptors to meet our needs, we have kept the same number of qualitative criteria in our rubrics. For example, our reading comprehension rubric has the same two scoring criteria as the CAPT. Where our rubric asks students to “explain”, the CAPT asks students to “interpret”. The three rating levels of our rubric are “limited”, “acceptable”, and “excellent”, while the CAPT Reading for Information ratings are “below basic”, “proficient”, and “goal”.

We have other standardized rubrics as well: rubrics that mimic the six-point PSAT/SAT scoring scale for our junior essays, and rubrics that address the nine-point Advanced Placement scoring scale.

Our creation of rubrics to match the scoring scales of standardized tests is no accident. Our customized rubrics help our teachers determine a student’s performance growth on common assessments that serve as indicators for standardized tests. Many of our current rubrics correspond to standardized test scoring scales of 3, 6, or 9 points; however, these rating levels will soon change.

Our reading and writing rubrics will need to be recalibrated in order to present NEASC with school-wide rubrics that measure 21st Century learning skills; other rubrics will need to be designed to meet our topics. Our NEASC committee has determined that four-point scoring rubrics would be more appropriate for six topics:

  • Collaboration
  • Information literacy*
  • Communication*
  • Creativity and innovation
  • Problem solving*
  • Responsible citizenship

These six scoring criteria for NEASC highlight a measurement gap created by relying on standardized tests, which directly address only three (*) of these 21st Century skills. Measuring the other 21st Century skills requires schools like ours to develop their own data stream.

Measuring student performance should require multiple metrics. In Connecticut, however, measurement is complicated by the lack of common scoring rubrics between the state standardized tests and the accrediting agency NEASC. The scoring of the state tests themselves can also be confusing, as three (3) or six (6) point score results are organized into bands labelled 1-5. Scoring inequities could be exacerbated when the CMT, CAPT, and similar standardized tests are used in 2013 and 2014 as 40% of a teacher’s evaluation, with an additional 5% based on whole-school performance.

The measurement of student performance in 21st Century skills will be addressed in teacher evaluation through the Common Core State Standards (CCSS), but the corresponding tests are still being designed. By 2015, new tests that measure student achievement according to the CCSS, with their own criteria, levels, and descriptors in new rubrics, will be implemented. This emphasis on standardized tests measuring student performance with multiple rubrics has become the significant measure of student and teacher performance, a result of the newly adopted Connecticut Teacher Evaluation (SEED) program.

The consequence is that today’s classroom teachers spend a great deal of time reviewing data that has limited correlation across the standards of measurement found in state-wide tests (CMT, CAPT, CCSS), nation-wide tests (AP, PSAT, SAT, ACT), and accrediting agencies (NEASC). Ultimately, valuable teacher time is being expended determining student progress across a multitude of rubrics with little correlation; in simplest terms, teachers are spending a great deal of time comparing apples to oranges.

I do not believe that a single metric such as Connecticut’s CMT or CAPT, or any standardized test, accurately reflects a year of student learning; these tests are snapshots of student performance on a given day. The goals of NEASC, which accredits schools by measuring student performance with school-wide rubrics that demonstrate 21st Century skills, are more laudable. However, since the singular test metric has been adopted as a critical part of Connecticut’s newly adopted teacher evaluation system, teachers here must serve two masters, testing and accreditation, each with its own separate system of measurement.

With the aggregation of all these differing data streams, one data stream is missing: no data is being collected on the cost in teacher hours of the collection, review, and recalibration of data. That specific stream of data would show that in this Age of Measurement, teachers have less time to work with students; the kind of time that could allow teachers to engage students in the qualities of ages past: reason, discovery, and enlightenment.