Archives For SAT

Notice how I am trying to beat the character limit on headlines?

Here’s the translation:

For your information, Juniors: Connecticut’s Common Core State Standards Smarter Balanced Assessment [Consortium] is Dead on Arrival; Insert Scholastic Assessment Test

Yes, in the State of Connecticut, the test created through the Smarter Balanced Assessment Consortium (SBAC) based on the Common Core State Standards will be canceled for juniors (11th graders) this coming school year (2015-16) and replaced by the Scholastic Assessment Test (SAT).

The first reaction from members of the junior class should be an enormous sigh of relief: there will be one less set of tests to take during the school year. The second sigh will come from other students, faculty members, and the administrative team for two major reasons: the computer labs will now be available year-round, and schedules will not have to be rearranged for testing sessions.

SAT vs. SBAC Brand

In addition, the SAT will most likely receive more buy-in from all stakeholders because of its credibility. Students know what the SAT brand is and what the scores mean; students are already invested in doing well for college applications. Even the shift from the old top score of 1600 (pre-2005) to 2400 with the addition of an essay has been met with general understanding that a top score is 800 in each section (math, critical reading, and writing). A student’s SAT scores are part of a college application, and a student may take the SAT repeatedly in order to submit the highest score.

In contrast, the SBAC brand never reported individual student results. The SBAC was created as an assessment for collecting data for teacher and/or curriculum evaluation. When the predictions of the percentage of anticipated failures in math and English were released, there was frustration for teachers and additional disinterest from students. There was no ability to retake, and if the predictions meant no one could pass, why should students even try?

Digital Testing

Moreover, while the SBAC drove the adoption of digital testing in the state in grades 3-8, most of the pre-test skill development was still done in paper-and-pencil format. Unless the school district consistently offered a seamless integration of 1:1 technology, there could be a question as to what was being assessed: a student’s technical skills or application of background knowledge. Simply put, skills developed with paper and pencil may not translate the same way on digital testing platforms.

As a side note, those who use computer labs or develop student schedules will be happy to know that the SAT is not a digital test... at least, not yet.

US Education Department Approved Request 

According to an early report (2006) by the Brookings Institution, the SBAC’s full suite of summative and interim assessments and its Digital Library of formative assessment was first estimated to cost $27.30 per student (grades 3-11). The design of the assessment would be made economical if many states shared the same test.

Since that initial report, several states have left the Smarter Balanced Consortium entirely.

In May, the CT legislature voted to halt the SBAC in grade 11 in favor of the SAT. This switch will increase the cost of testing. According to an article (5/28/15) in the CT Mirror, “Debate Swap the SAT for the Smarter Balanced Tests”:

“Testing students this year and last cost Connecticut $17 million,” the education department reports. “And switching tests will add cost,” Commissioner of Education Dianna Wentzell said.

This switch was approved by the U.S. Department of Education for Connecticut schools on Thursday, 8/6/15; the CT Department of Education had asked that it not be penalized under the No Child Left Behind Act’s rigid requirements. Currently, the switch to the SAT would not change the tests in grades 3-8; the SBAC would continue at these grade levels.

Why SBAC at All?

All this raises the question: why was 11th grade selected for the SBAC in the first place? Was the initial cost a factor?

Since the 1990s, the State of Connecticut had given the Connecticut Academic Performance Test (CAPT) in grade 10, and even though the results were reported late, there were still two years to remediate students who needed to develop skills. In contrast, the SBAC was given in the last quarter of grade 11, leaving less time to address struggling students’ needs. I mentioned these concerns in an earlier post: The Once Great Junior Year, Ruined by Testing.

Moving the SBAC to junior year increased the amount of testing for those electing to take the SAT, with some students also taking the ASVAB (Armed Services Vocational Aptitude Battery) or being selected to take the NAEP (National Assessment of Educational Progress).

There have been three years of “trial testing” for the SBAC in CT, with limited feedback to teachers and students. In contrast, the results from the SAT have always been available as an assessment to track student progress, with results reported to school guidance departments.

Before No Child Left Behind, before the Common Core State Standards, before the SBAC, the SAT was there. What took them (the legislature, the Department of Education, etc.) so long?

Every Junior Will Take the NEW SAT

Denver Post: Heller

In the past, not every student elected to take the SAT, but many districts did offer the PSAT as an incentive. This coming year, the SAT will be given to every 11th grader in Connecticut.

The big wrinkle in this plan?
The SAT test has been revised (again) and will be new in March 2016.

What should we expect with this test?

My next headline?

OMG. HWGA. (Here We Go Again.)

Not so long ago, 11th grade was a great year of high school. The pre-adolescent fog had lifted, and the label of “sophomore,” literally “wise fool,” gave way to the less insulting “junior.” Academic challenges and social opportunities for 16- and 17-year-olds increased as students sought driver’s permits/licenses, employment, or internships in an area of interest. Students in this stage of late adolescence could express interest in their future plans, be it school or work.

Yet the downside to junior year had always been college entrance exams, and so junior year had typically been spent in preparation for the SAT or ACT. When to take these exams had always been up to the student, who paid a base price of $51 (SAT) or $36.50 (ACT) for the privilege of spending hours testing in a supervised room and weeks in anguish waiting for the results. Because a college accepts the best score, some students could choose to take the test many times, as scores generally improve with repetition.

Beginning in 2015, however, juniors must prepare for another exam in order to measure their learning under the Common Core State Standards (CCSS). The two federally funded testing consortia, the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have selected 11th grade to determine how college- and career-ready a student is in English/Language Arts and Math.

The result of this choice is that 11th grade students will be taking the traditional college entrance exam (SAT or ACT) on their own as an indicator of their college preparedness. In addition, they will take another state-mandated exam, either the SBAC or the PARCC, that also measures their college and career readiness. While the SAT or ACT is voluntary, the SBAC or PARCC will be administered during the school day, using 8.5 hours of instructional time.

Adding to this series of tests lined up for junior year are the Advanced Placement exams. Many 11th grade students opt to take Advanced Placement courses in a variety of disciplines, either to gain college credit for a course or to signal to college admissions officers an academic interest in college-level material. These exams are also administered during the school day in the first weeks of May, each taking four hours to complete.

One more possible test to add to this list is the Armed Services Vocational Aptitude Battery (ASVAB), which, according to the website Today’s Military, is given at more than half of all high schools nationwide to students in grades 10, 11, or 12, although 10th graders cannot use their scores for enlistment eligibility.

The end result is that junior year has gradually become the year of testing, especially from March through June, and all this testing is cutting into valuable instructional time. When students enter 11th grade, they have completed many prerequisites for more advanced academic classes, and they can tailor their academic program with electives, should electives be offered. For example, a student’s success with required courses in math and science can inform his or her choices in economics, accounting, pre-calculus, Algebra II, chemistry, physics, or Anatomy and Physiology. Junior year has traditionally been a student’s greatest opportunity to improve a GPA before making college applications, so time spent learning is valuable. In contrast, time spent in mandated testing robs each student of classroom instruction time in content areas.

In taking academic time to schedule exams, schools can select their two concurrent exam weeks for performance and non-performance task testing. The twelve-week period (excluding blackout dates) from March through June is the current nationwide target for the SBAC exams, and schools that choose an “early window” (March-April) will lose instructional time before the Advanced Placement exams, which are given in May. Mixed (grades 11 and 12) Advanced Placement classes will be impacted during scheduled SBACs as well, because teachers can only review past material instead of progressing with new topics in a content area. Given these circumstances, what district would ever choose an early testing window? Most schools should opt for the “later window” (May) in order to allow 11th grade AP students to take the college credit exam before having to take (another) exam that determines their college and career readiness. Ironically, the barrage of tests that juniors must now complete to determine their “college and career readiness” is leaving them with less and less academic time to become college and career ready.

Perhaps the only fun remaining for 11th graders is the tradition of the junior prom. Except proms are usually held between late April and early June, when, you guessed it, there could be testing.

The recent invitation to respond to the statement “Don’t Teach the Test” was under discussion in the New York Times Invitation to a Dialogue series. The question was posed by Peter Schmidt, the director of studies at Gill St. Bernard’s School, and he singled out two tests in particular: the SAT and the Advanced Placement tests.

Schmidt suggested that the SAT should be eliminated as a requirement for college on the basis of economic inequality. Students who have the finances to take prep courses or hire tutors have an advantage, and Schmidt argues that “our colleges are further promoting the inequities of our society.”

Schmidt also called for an end to Advanced Placement (A.P.) courses in high school, saying that they

“too often fail to prepare students adequately for college-level course work. They also put pressure on students to perform well on the A.P. exams in the spring, leaving them exhausted and lacking a spirit of intellectual curiosity.”

Full disclosure: I teach Advanced Placement English Literature, and I have served as an A.P. Reader.

That said, I believe Schmidt is right about the pressure the testing for these courses places on students. I agree that these students are exhausted during the first two weeks of May, since students who take A.P. courses often take more than one A.P. class; many students are scheduled for two separate tests on the same day. But as to his assessment that A.P. courses do not prepare students for college-level work, I must respectfully disagree.

Students who take A.P. courses recognize that they may or may not receive college credit for the course. College credit is given based on a student’s test score (a minimum of “3” on the A.P. English Language or Literature exam) and the willingness of the college to accept that score in lieu of an undergraduate course. As a result, there are no guarantees of college credit in an A.P. class; however, colleges do look to see if students are taking A.P. classes as an indication of their academic ambitions.

The A.P. exams in all subject areas are a mix of multiple-choice questions and essay questions. On the A.P. English Literature exam, there are 55 challenging questions on five or six literature selections. Students need a command of vocabulary and the ability to “close read,” a skill that was the hallmark of A.P. courses long before the Common Core State Standards. But the most demanding part of the A.P. English Literature exam is the essay section, where students write three essays in response to three prompts in two hours.

My students practice writing to these prompts throughout the school year.  They learn to read, annotate, and draft quickly, but Schmidt raises a good question.

Does the A.P. test prepare students for college?

In responding to Schmidt’s concern, I have thought about how my students’ responses to the essay test questions are not the only measure for determining student understanding. A good A.P. course incorporates the practice of revising drafts written for a practice test. There is always a gem of an idea in these hasty constructions. There is always some hypothesis that a student will discover as he or she “writes into” the prompt, something I have previously referred to as a “manifesto in the muck.” A good A.P. course provides a student with the chance to take that essay draft and expand and revise it. A good A.P. course gives students the chance to start again with the end of the draft in order to begin a better essay.

Schmidt complains about “the lack of imagination and creativity” that “are the cornerstones of genuine learning,” but these generalizations are not true. I know first-hand that there is nothing to stop a student’s imagination or creativity in responding to a work of literature in an A.P. course. Some of the most amazing statements or ideas I have read have come from students undergoing the intellectual crucible of writing an organized essay in under 40 minutes. In reading these practice drafts, some rife with grammatical errors and misspellings, I will pause with my red pen suspended, repeating to myself, “First, do no harm,” as I leave a draft untouched. The A.P. program advises instructors to “reward the student for what they do well,” even on a practice test.

There are too many reasons not to like the standardized tests that are choking education today; the limited data that standardized testing yields is often not worth the time and expense. Frankly, I am no fan of the College Board. The limitations of the A.P. test, however, do not mean that an A.P. course is not valuable.

The A.P. test, like all standardized tests, is a single metric, but an A.P. course is a much broader experience. So, yes, I teach to the test, but I also teach the A.P. course as preparation for the rigors of college-level work; in particular, I teach the course so that my students will have the option to waive a 100-level composition class, freeing them to take a course in their major field of study.

Schmidt concluded his invitation with an impassioned plea,

As E. M. Forster wrote more than a century ago in Howards End, in addressing the shortcomings of British universities: “Oh yes, you have learned men who collect … facts, and facts, and empires of facts. But which of them will rekindle the light within?”

I would argue that my A.P. class is the only place in my curriculum where I can offer the writings of E.M. Forster, if for no other reason than to see how students would respond to that literary prompt. I know that among their responses, there could be one from a student who, writing under intense pressure, could draft a sentence or two that would reveal a “kindle of light within.” Whether that student response is written in a test booklet during the A.P. exam or not does not matter.

Open House: OMG!

September 15, 2013

September is Open House Month, and the welcoming speech from a teacher could sound like this:

“Welcome, Parents! Let me show you how to access my website on the SMARTboard where you can see how the CCSS are aligned with our curriculum. You can monitor your child’s AYP by accessing our SIS system, Powerschool. In addition, all of our assignments are on the class wiki that you can access 24/7.  As we are a BYOD school, your child will need a digital device with a 7″ screen to use in class.”

OMG!

How parents may feel during Open House listening to education acronyms

The result of such a speech is that parents may feel like students all over again. The same people who sat in these desks, perhaps only a few years ago, are now on the other side of the classroom experience, and the rapid changes caused by the use of technology in education necessitate an education primer, a list of important terms to know. While attending the Open House, parents can observe that there are still bulletin boards showcasing student work. They can note how small the desks appear now, if there are desks. Perhaps the lunch lady is the same individual who doled out applesauce and tater tots onto their school lunch trays. Yet listening to how instruction is delivered, monitored, and accessed may make parents feel they are in some alien experience, with instructors and administrators spouting a foreign language. Just what is a wiki? they may wonder, and what does BYOD stand for?

So, let’s begin with some of the acronyms.  At Open House, educators may casually throw around some of the following terms to explain what they teach or how they measure what they teach:

  • PBL (Project Based Learning): a hands-on lesson;
  • SIS (Student Information System);
  • Bloom’s Taxonomy: a sequence of learning based on complexity of task and level of critical thinking, now being replaced by the DOK;
  • DOK (Depth of Knowledge): the complexity of task and level of critical thinking required;
  • ESL (English as a Second Language);
  • AYP (Adequate Yearly Progress);
  • WIKI: a web application that allows people to add, modify, or delete content in collaboration with others; and
  • SMARTboard: an interactive whiteboard.

Subject area names may also seem unfamiliar, since they now reflect a different focus in education. English is now ELA (English/Language Arts), while science and math have merged like the Transformers into the mighty STEM (Science, Technology, Engineering, and Math). The old PE class may now bear the moniker Physical Activity and Health (PAH), but History has already dealt with the shift to the more inclusive term Social Studies.

Assessment (testing) brings another page of education acronyms that parents may hear at Open House, including these few examples:

  • DRP (Degrees of Reading Power): reading engagement, oral reading fluency, and comprehension for younger elementary students;
  • DRA (Developmental Reading Assessment): reading engagement, oral reading fluency, and comprehension for elementary and middle grade students;
  • STAR: new skills-based test items and new in-depth reports for screening, instructional planning, and progress monitoring; and
  • PSAT/SAT/ACT: designed to assess student academic readiness for college.

Parents, however, should be aware that they are not alone in their confusion. Educators often deal with acronym duplication, and state by state the abbreviations may change. In Connecticut, some students have IEPs (Individual Education Plans), but all students have SSPs (Student Success Plans), which share an acronym with the SSP (Strategic School Profile). Connecticut introduced the teacher evaluation program SEED, the System for Educator Evaluation and Development, an acronym not to be confused with SEED, a partnership with urban communities to provide educational opportunities that prepare underserved students for success in college and career.

Federal programs only add to the list of abbreviations. Since 1975, students have been taught under IDEA (the Individuals with Disabilities Education Act). NCLB (No Child Left Behind) has been the dominating force in education for the length of the Class of ’14’s time in school, along with its partner the SSA (Student Success Act), which is similar to, but not exactly like, the SSP mentioned earlier. The latest initiative to enter the list of reform movements that parents should know is the CCSS, the Common Core State Standards.

The CCSS are academic standards developed in 2009 and adopted by 45 states in order to provide “a consistent, clear understanding of what students are expected to learn, so teachers and parents know what they need to do to help them.” Many of the concepts in the CCSS will be familiar to parents; however, the grade level at which they are introduced may be a surprise. Just as their parents may have been surprised to find the periodic table in their 5th grade science textbooks, there are many concepts in math (algebra) and English (schema) that are being introduced as early as kindergarten.

So when a student leaves in the morning with a digital device for school (BYOD, or BYOT, Bring Your Own Technology) and sends a “text” saying they will be staying late for extra help or extra-curricular activities, parents should embrace the enhanced communication that this Brave New World of technology in education offers. If at Open House a parent needs a quick explanation of the terms being used by a teacher, he should raise his hand; in spite of all these newfangled terms and devices, that action still signals a question.

Above all, parents should get to know the most important people in the building: the school secretary (sorry, the Office Coordinator) and the school custodian (sorry, FMP: Facility Maintenance Personnel). They know where your child left her backpack.

Since many college applications are due between January 1 and February 1, I know that many of my students are fretting about their SAT scores. I wish I could tell them to relax, that the score is just a score, and that they will never have to hear the words SAT again, but that would not be telling them the truth. The hairy hand of the SAT can reach far forward into their future. An SAT score is a brand, locking academic potential into a data point where we are forever 17 years old.

When I took the test, it was known as the Scholastic Aptitude Test, and that was before it became known as the Scholastic Assessment Test. At that time, the top score was a 1600, and there was no writing section. There were no pre-tutoring sessions from pricey tutors available after school or on Saturdays to practice for the SAT. I think I glanced through a practice book.

That Saturday morning, I was dropped off by my father in our ’68 VW van along with hundreds of equally bleary-eyed seniors. I think I paid that day, because I remember waiting for him to write out a check. About two hours later, in the middle of the math section, I remember thinking, “Whoa... maybe I should have studied for this.” I had approached this milestone in my life with a little too much confidence and too little breakfast. I came out of that ordeal exhausted and starved.

Some 38 years later, I am still reminded of the results from that day. For example, on applications to graduate school, there is always a question about my score on the SAT taken back in 1974.

“Really?” I think to myself. “I am so much better a student today. I have two graduate degrees, and I am gainfully employed in the field of education. I am a very differently educated person from my 17-year-old self. Then, I was financially strapped, working part-time in a pizza restaurant, and I had yet to attend my first rock concert. Yet you still want to know what my high school SAT score was?”

While I am not ashamed of my score, I am not posting it, either. Fortunately, because of my SAT score, I have been able to waive out of other standardized tests, for example, the Praxis I in Connecticut which requires a combined minimum score of 1000. You can be content to know I met this minimum standard with several hundred points to spare. I did very well on the verbal, but in retrospect, I probably could have done better had I prepared for the math section a little more.

So when I come to that question on an application, I think how that score, taken one cold spring morning when I was 17, cannot accurately reflect who I am today. Nor do I think that an SAT score accurately reflects who my students are, either. At this time of year, I hear them discuss numbers as they explain why they may or may not, or did or did not, get into a college of their choice. Sometimes I am surprised to hear particularly high or low scores; however, this information never changes my opinion of the student seated in my class. A student with a particularly high SAT score may never turn a paper in on time and have a failing grade, while a student with a low SAT score may have an “A” in my class because every assignment is done on time or revised when recommended. The SAT may be an “indicator,” but these are students, not numbers. The score on an SAT can still fall subject to human error.

I do not think that at age 17 I fully understood how far forward into my future the hairy hand of the SAT would travel. I doubt my students understand either, but I hope they know that their future will not depend on their 17-year-old academic selves.

I suppose I should be grateful that when I am asked for my SAT score, there is not also a request for additional identification, say, a picture of me in that decade. That thought is chilling. The hiphuggers, bell bottoms, velvet jackets, and ubiquitous leotards of my high school decade are positively comical. My yellow chiffon prom dress is particularly hilarious. On the whole, I’d rather they see my SAT score.

David Coleman, incoming president of the College Board, is staring out from the front cover of the October 2012 issue of The Atlantic. Actually, he is not staring. I think he is smirking... a Cheshire Cat smirk.

He has every reason to smirk. Coleman, one of the architects of the Common Core State Standards, has emerged as one of the more influential education policymakers, changing what will be taught in classrooms and how this content will be taught without ever having spent time in the classroom himself.

Yes, Coleman has never taught in a public school classroom, although he was very successful as a student. He was educated in the Manhattan public school system, the son of highly educated parents: his father a psychiatrist, his mother the president of Bennington College. His privileged liberal arts credentials are impressive and include Yale, a Rhodes Scholarship, Oxford, and Cambridge.

His perspective on education has been informed by the business side of education, including pro-bono work at the management consulting firm McKinsey & Company. He developed and sold the assessment company Grow Network; co-founded and sold Student Achievement Partners; and most recently accepted a position as president of The College Board.

Coleman has materialized, like Lewis Carroll’s enigmatic Cheshire Cat, as the cool outsider who surveys education as a Wonderland ruled by nonsense. He has promoted an agenda of close reading and an increase in non-fiction, to a ratio of 70% of all required reading by grade 12, from his perch high above the daily dust-ups of the average classroom.

Now, after developing the CCSS, replete with new batteries of state tests, he has moved on to the pinnacle of high stakes testing, the SAT. His arrival comes amid renewed concerns from studies about the SAT that demonstrate the unfairness of the test for minorities, females, and students living in poverty.

While I can embrace many of the standards in the English Language Arts Common Core State Standards (CCSS), I remain unconvinced by Coleman’s sweeping claims that multi-day “close reading” lessons focused on a complex and difficult text are critical to improving understanding. I have practiced close reading, but not with the singular and tortuous focus Coleman advocates. There is little research as to how this approach will improve reading skills for all students. For 21 years, I have been a “boots on the ground” promoter of reading to a population of students who are reading less and less of the assigned material, so I speak from experience when I state that Coleman’s emphasis on close reading can have an adverse effect on an already poor reader.

Furthermore, Coleman negates the effectiveness of the past 35 years of having students engage with a text using Louise Rosenblatt’s Reader Response Theory. His blunt charge that “as you grow up in this world you realize people really don’t give a sh*t about what you feel or what you think” is simply not true. I cannot imagine an author who would not want to know what a reader thought. Writing is supposed to inspire; writing is an invitation to a dialogue. And how will dismissing what students think engage them in writing at all?

The question is how Coleman came to place his large footprint on education, and why teachers let him move into this position. Were teachers so preoccupied with teaching that they failed to see how the dynamics of education were shifting, from engaging leaders from public school institutions to accepting leadership from commercial enterprises?

Dennis Van Roekel alluded to the rise of Coleman and others like him when he delivered an address to the National Education Association 91st Representative Assembly this past July:

Are we willing to assert our leadership, and take RESPONSIBILITY for our professions?
The demands of our work are changing as our students change, and the world around us is changing too – ever so fast. I say it is time for us to lead the next generation of professionals – in educating the next generation of students!

I’m so tired of OTHERS defining the solutions… without even asking those who do the work every day of their professional life.
I want to take advantage of this opportunity for US to lead – and I’m not waiting to be asked, nor am I asking anyone’s permission.

Because if we are not ready to lead, I know there are many others ready, willing, and waiting to do it for us. Or maybe I should say, do it “to” us.

Van Roekel’s words echo the question rhetorically posed by noted educator Lucy Calkins in her presentation at the 82nd reunion of Teachers College, Columbia University: “Where is the proof, David Coleman, that your strategy works?”

Coleman’s ascent to the top of American education policy has been steady. His contributions to the CCSS will result in nationwide metrics for grades K-12. Add this testing to his new control of the SAT, and his influence on American education and the tests that measure learning will continue through the college level, all without his having the informative experience of teaching in a classroom. That any one individual without any teaching experience could have had this impact on the daily workings of the classroom is a commentary on the current state of madness in which public education now finds itself.

At one point in her Adventures in Wonderland, Alice comes across the Cheshire Cat in the hope of finding her way out:

‘But I don’t want to go among mad people,’ Alice remarked.
‘Oh, you can’t help that,’ said the Cat: ‘we’re all mad here. I’m mad. You’re mad.’
‘How do you know I’m mad?’ said Alice.
‘You must be,’ said the Cat, ‘or you wouldn’t have come here.’

Carroll’s Cheshire Cat is a tease, an enigmatic riddler who offers judgments and cryptic clues but no solution to the frustrated Alice. Coleman is education’s Cheshire Cat, offering positions on education but no evidence that his solutions will work.

Curiouser and curiouser. David Coleman has become one of the most influential educational policymakers in our public school systems, but at this time, we have little else but his smirk.

Is this the Age of Enlightenment? No.
Is this the Age of Reason? No.
Is this the Age of Discovery? No.

This is the Age of Measurement.

Specifically, this is the age of measurement in education, one in which an unprecedented amount of a teacher’s time is given over to the collection and review of data. Student achievement is being measured with multiple tools in the pursuit of improving student outcomes.

I am becoming particularly attuned to the many ways student achievement is measured, as our high school is scheduled for an accreditation visit by the New England Association of Schools and Colleges (NEASC) in the spring of 2014. I am serving as a co-chair with the very capable library media specialist, and we are preparing for the use of school-wide rubrics.

Several of our school-wide rubrics currently in use have been designed to complement scoring systems associated with our state tests, the Connecticut Mastery Tests (CMT) and the Connecticut Academic Performance Tests (CAPT). While we have modified the criteria and revised the language in the descriptors to meet our needs, we have kept the same number of qualitative criteria in our rubrics. For example, our reading comprehension rubric has the same two scoring criteria as does the CAPT. Where our rubric asks students to “explain”, the CAPT asks students to “interpret”. The three rating levels of our rubric are “limited”, “acceptable”, and “excellent”, while the CAPT Reading for Information ratings are “below basic”, “proficient”, and “goal”.

We have other standardized rubrics as well: rubrics that mimic the six-point PSAT/SAT essay scoring for our junior essays, and rubrics that address the nine-point Advanced Placement scoring scale.

Our creation of rubrics that match the scoring scales of standardized tests is not an accident. Our customized rubrics help our teachers determine a student’s performance growth on common assessments that serve as indicators for standardized tests. Many of our current rubrics correspond to standardized test scoring scales of 3, 6, or 9 points; however, these rating levels will soon change.

Our reading and writing rubrics will need to be recalibrated in order to present NEASC with school-wide rubrics that measure 21st Century learning skills; other rubrics will need to be designed to meet our topics. Our NEASC committee has determined that four-point scoring rubrics would be more appropriate for six topics:

  • Collaboration
  • Information literacy*
  • Communication*
  • Creativity and innovation
  • Problem solving*
  • Responsible citizenship

These six scoring criteria for NEASC highlight a measurement gap created by relying on standardized tests, which directly address only three (*) of these 21st Century skills. Measuring the other 21st Century skills requires schools like ours to develop their own data stream.

Measuring student performance should require multiple metrics. Measuring student performance in Connecticut, however, is complicated by the lack of common scoring rubrics between the state standardized tests and the accrediting agency NEASC. The scoring of the state tests themselves can also be confusing, as three- or six-point score results are organized into bands labelled 1-5. Scoring inequities could be exacerbated when the CMT and CAPT and similar standardized tests are used in 2013 and 2014 as 40% of a teacher’s evaluation, with an additional 5% based on whole-school performance. The measurement of student performance in 21st Century skills will be addressed in teacher evaluation through the Common Core State Standards (CCSS), but those tests are still being designed. By 2015, new tests that measure student achievement according to the CCSS, with their own criteria, levels, and descriptors in new rubrics, will be implemented. This emphasis on standardized tests measuring student performance with multiple rubrics has become the significant measure of student and teacher performance, a result of Connecticut’s newly adopted teacher evaluation (SEED) program.

The consequence is that today’s classroom teachers spend a great deal of time reviewing data that has limited correlation between the standards of measurement found in state-wide tests (CMT, CAPT, CCSS), those in nation-wide tests (AP, PSAT, SAT, ACT), and what is expected by the accrediting agency (NEASC). Ultimately, valuable teacher time is being expended in determining student progress across a multitude of rubrics with little correlation; in simplest terms, teachers are spending a great deal of time comparing apples to oranges.

I do not believe that a single metric such as Connecticut’s CMT or CAPT, or any standardized test, accurately reflects a year of student learning; these tests are snapshots of student performance on a given day. The NEASC accreditation goal of measuring student performance with school-wide rubrics that demonstrate 21st Century skills is more laudable. However, because the singular test metric has been adopted as a critical part of Connecticut’s newly adopted teacher evaluation system, teachers here must serve two masters, testing and accreditation, each with its own separate system of measurement.

With the aggregation of all these differing data streams, one data stream is missing. No data is being collected on the cost in teacher hours of collecting, reviewing, and recalibrating data. That specific stream of data would show that in this Age of Measurement, teachers have less time to work with students, the kind of time that could allow teachers to engage students in the qualities of ages past: reason, discovery, and enlightenment.