
Rosetta

The Rosetta Stone, currently located in the British Museum in London, England.

When I stood in front of the Rosetta Stone in the British Museum in London, I had to wiggle my way through the blockade of tourists who were trying to photograph the small black tablet. Since the stone was encased in glass, I knew the reflections from the camera flashes would result in poor quality photos. Once I had my few seconds before the 2,200-year-old tablet, I headed off to the gift shop to secure a clear photo of the Rosetta Stone and a small plaster cast of the dark black stone; both revealed far more detail than I could see while squeezed by the crowd.

The face of the Rosetta Stone is inscribed with three different scripts, each spelling out the same decree issued on behalf of King Ptolemy V at Memphis, Egypt, in 196 BCE: the upper text is Ancient Egyptian hieroglyphs, the middle portion Demotic script, and the lowest Ancient Greek. Because the Rosetta Stone presented the same text in each script (with a few minor differences among them), the tablet provided the key to our modern understanding of Egyptian hieroglyphs.

Since the Rosetta Stone so often serves as a metaphor for the essential clue that unlocks a new field of knowledge, why not use it as a metaphor for explaining the role of data, specifically standardized test data, in informing classroom instruction? Imagine that the different stakeholders (school administrators, teachers, students, parents, and test creators) who look at the results of standardized tests are like those who crowd before the Rosetta Stone trying to decipher its meaning.

The first linguists who worked with the Rosetta Stone were able to look closely, touch, and take rubbings of the different scripts as they translated each of the texts. They spent time puzzling over the different alphabets, and they constructed primers to help decode each of the languages. They could see the variations in the engraver’s strokes; they could examine nuances in the chisel marks that formed the symbols. As to the contents of the missing or damaged sections, the linguists made educated guesses.

Likewise, in education there are those who are knowledgeable in translating the information from standardized tests, those who have spent time examining data, looking for trends in collective or individual student progress over time, or perhaps comparing student cohorts. The Rosetta Stone metaphor breaks down, however, when we try to compare the different forms of data collected across the multitude of standardized tests. Each test or assessment is constructed on its own single metric, and scores do not translate from one standardized test to another. For example, the state-mandated Connecticut Mastery Tests (CMT, grades 3-8) are not correlated with a diagnostic reading assessment such as the DAR. The Connecticut Academic Performance Test (CAPT, grade 10) cannot be directly compared to the PSAT, the ACT, or the NAEP, and none of these standardized tests are comparable to one another.

Consider also how the linguists who studied the Rosetta Stone lingered over different interpretations in order to translate the symbols of each script. They studied a finite number of symbols that related to a finite statement, fixed in time.

In contrast, standardized testing associated with education reform is on the upswing, and today’s educators must review continuous waves of incoming data. Often, by the time the results are finally released, their value in informing classroom instruction has been compromised. The results tell educators only what students could do months earlier, not what they are doing in real time. Just like the time-stamped images each tourist’s camera records of the Rosetta Stone, standardized tests are time-stamped snapshots of past student performance.

How ironic, then, that so much media attention is given over to the results of standardized tests in informing the public about student progress. How like the crowds snapping blurry photos around the Rosetta Stone are those who do not understand exactly what each standardized test measures.

What they should appreciate is that prioritizing the streams of data is key to improving instruction, and that the day-to-day collection of information in a classroom is arguably a more accurate snapshot of student ability and progress.

There are the classroom assessments that teachers record on progress reports and report cards: the homework, quizzes, tests, and projects that measure student achievement against grade-level standards and requirements. Then there is the “third leg” of data, the anecdotal data that can be used to inform instruction. Anecdotal data may take the form of noting a student sleeping in class (“Has she been up late?”), reviewing a lesson plan that did not work (“I should have used a picture to help them understand”), or reporting a fire drill during testing (“Interruptions distracted the students”). Here the multiple forms of data collected to measure student progress are fluid and always changing, and translating these results is like the linguists’ hands-on translation of the Rosetta Stone: noting the variations and nuances and making educated guesses.

Standardized test results are most useful in determining trends, and if translated correctly, these results can help educators adjust curriculum and/or instructional strategies. But as a means of tracking student learning, these results are antiquated. Students are not the same day to day, week to week, semester to semester. Their lives are not inscribed in flat symbols; rather, students live lives of constant change as they evolve, grow, and learn.

Just as the Rosetta Stone was critical to understanding texts of the ancient world, our standardized tests are the “ancient texts” of contemporary education. Standardized tests cannot be the only measurement of student and school performance the public gets to interpret, since the results are limited snapshots of the past. Student and school performance is best understood by looking at the timely combination of all streams of data. To do otherwise is to look at snapshots that are narrow, unchangeable, and, like many of those photos snapped in the British Museum, overexposed.

Is this the Age of Enlightenment? No.
Is this the Age of Reason? No.
Is this the Age of Discovery? No.

This is the Age of Measurement.

Specifically, this is the age of measurement in education, where an unprecedented amount of a teacher’s time is being given over to the collection and review of data. Student achievement is being measured with multiple tools in the pursuit of improving student outcomes.

I am becoming particularly attuned to the many ways student achievement is measured, as our high school is scheduled for an accreditation visit by the New England Association of Schools and Colleges (NEASC) in the spring of 2014. I am serving as a co-chair with our very capable library media specialist, and we are preparing school-wide rubrics for use.

Several of the school-wide rubrics currently in use were designed to complement the scoring systems associated with our state tests, the Connecticut Mastery Test (CMT) and the Connecticut Academic Performance Test (CAPT). While we have modified the criteria and revised the language in the descriptors to meet our needs, we have kept the same number of qualitative criteria in our rubrics. For example, our reading comprehension rubric has the same two scoring criteria as the CAPT. Where our rubric asks students to “explain,” the CAPT asks students to “interpret.” The three rating levels of our rubric are “limited,” “acceptable,” and “excellent,” while the CAPT Reading for Information ratings are “below basic,” “proficient,” and “goal.”

We have other rubrics keyed to standardized tests as well: for example, rubrics that mimic the six-point PSAT/SAT essay scoring for our junior essays, and rubrics that address the nine-point Advanced Placement scoring scale.

Our creation of rubrics to match the scoring scales of standardized tests is not an accident. Our customized rubrics help our teachers determine a student’s performance growth on common assessments that serve as indicators for standardized tests. Many of our current rubrics correspond to standardized test scoring scales of 3, 6, or 9 points; however, these rating levels will soon be changed.

Our reading and writing rubrics will need to be recalibrated in order to present NEASC with school-wide rubrics that measure 21st century learning skills, and other rubrics will need to be designed to cover our chosen topics. Our school’s NEASC committee has determined that four-point scoring rubrics would be more appropriate for the six topics:

  • Collaboration
  • Information literacy*
  • Communication*
  • Creativity and innovation
  • Problem solving*
  • Responsible citizenship

These six scoring criteria for NEASC highlight a measurement gap that can be created by relying on standardized tests, which directly address only three (*) of these 21st century skills. Measuring the other 21st century skills requires schools like ours to develop their own data streams.

Measuring student performance should require multiple metrics. Measuring student performance in Connecticut, however, is complicated by the lack of common scoring rubrics between the state standardized tests and the accrediting agency, NEASC. The scoring of the state tests themselves can also be confusing, as three (3) or six (6) point score results are organized into bands labelled 1-5. Scoring inequities could be exacerbated when the CMT, CAPT, and similar standardized tests are used in 2013 and 2014 as 40% of a teacher’s evaluation, with an additional 5% resting on whole-school performance. The measurement of student performance in 21st century skills will be addressed in teacher evaluation through the Common Core State Standards (CCSS), but those tests are still being designed; by 2015, new tests that measure student achievement against the CCSS, with their own criteria, levels, and descriptors in new rubrics, will be implemented. This emphasis on standardized tests measuring student performance with multiple rubrics has become the significant measure of student and teacher performance alike, a result of Connecticut’s newly adopted teacher evaluation (SEED) program.
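To make the banding and the weighting concrete, here is a minimal sketch of both ideas. The cut-points, rounding rule, and example scores below are invented for illustration; they are not the state’s actual conversion formula or the official SEED components.

```python
# Hypothetical illustration only: the band cut-points and rounding rule are
# invented; they are not the actual CMT/CAPT band boundaries or SEED formula.

def to_band(raw_score: int, scale_max: int) -> int:
    """Collapse a raw score of 1..scale_max into one of five bands (1-5)."""
    fraction = (raw_score - 1) / (scale_max - 1)  # position on the scale, 0.0-1.0
    return 1 + round(fraction * 4)                # nearest of the five bands

# Different raw scores on different scales can collapse into the same band,
# which is one way cross-test comparisons lose information:
print(to_band(5, 6))  # a 5 on a 6-point rubric -> band 4
print(to_band(7, 9))  # a 7 on a 9-point rubric -> also band 4

# The weights cited above: 40% individual test results plus 5% whole-school
# performance puts 45% of a teacher's evaluation on standardized measures.
print(0.40 + 0.05)    # 0.45
```

Whatever the real cut-points are, the arithmetic is the same: once scores on 3-, 6-, or 9-point scales are squeezed into five bands, distinct performances become indistinguishable.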

The consequence is that today’s classroom teachers spend a great deal of time reviewing data despite the limited correlation between the standards of measurement found in state-wide tests (CMT, CAPT, CCSS), those in nation-wide tests (AP, PSAT, SAT, ACT), and what accrediting agencies (NEASC) expect. Ultimately, valuable teacher time is being expended in determining student progress across a multitude of rubrics with little correlation; yes, in simplest terms, teachers are spending a great deal of time comparing apples to oranges.

I do not believe that a single-metric measurement such as Connecticut’s CMT or CAPT, or any standardized test, accurately reflects a year of student learning; I believe that these tests are snapshots of student performance on a given day. The goals of NEASC, which accredits schools by measuring student performance with school-wide rubrics that demonstrate students performing 21st century skills, are more laudable. However, because the singular test metric has been adopted as a critical part of Connecticut’s newly adopted teacher evaluation system, teachers here must serve two masters, testing and accreditation, each with its own separate system of measurement.

With the aggregation of all these differing data streams, there is one data stream missing: no data is being collected on the cost, in teacher hours, of the collection, review, and recalibration of data. That specific stream of data would show that in this Age of Measurement, teachers have less time to work with students, the kind of time that could allow teachers to engage students in the qualities of ages past: reason, discovery, and enlightenment.