
Not so long ago, 11th grade was a great year of high school. The pre-adolescent fog had lifted, and the label of “sophomore,” literally “wise fool,” gave way to the less insulting “junior.” Academic challenges and social opportunities for 16- and 17-year-olds increased as students sought driver’s permits and licenses, employment, or internships in an area of interest. Students in this stage of late adolescence could express interest in their future plans, be it school or work.

Yet the downside to junior year had always been college entrance exams, and so junior year had typically been spent in preparation for the SAT or ACT. When to take these exams had always been up to the student, who paid a base price of $51 (SAT) or $36.50 (ACT) for the privilege of spending hours testing in a supervised room and weeks in anguish waiting for the results. Because a college accepts the best score, some students could choose to take the test many times, as scores generally improve with repetition.

Beginning in 2015, however, juniors must prepare for another exam, one that measures their learning against the Common Core State Standards (CCSS). The two federally funded testing consortia, the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have selected 11th grade as the year to determine how college and career ready a student is in English/Language Arts and Math.

The result of this choice is that 11th grade students will be taking the traditional college entrance exam (SAT or ACT) on their own as an indicator of their college preparedness. In addition, they will take another state-mandated exam, either the SBAC or the PARCC, that also measures their college and career readiness. While the SAT or ACT is voluntary, the SBAC or PARCC will be administered during the school day, using 8.5 hours of instructional time.

Adding to this series of tests lined up for junior year are the Advanced Placement exams. Many 11th grade students opt to take Advanced Placement courses in a variety of disciplines, either to gain college credit for a course or to indicate to college admissions officers an academic interest in college-level material. These exams are also administered during the school day during the first weeks of May, each taking 4 hours to complete.

One more possible test to add to this list is the Armed Services Vocational Aptitude Battery (ASVAB), which, according to the website Today’s Military, is given at more than half of all high schools nationwide to students in 10th, 11th, or 12th grade, although 10th graders cannot use their scores for enlistment eligibility.

The end result is that junior year has gradually become the year of testing, especially from March through June, and all this testing is cutting into valuable instructional time. When students enter 11th grade, they have completed many prerequisites for more advanced academic classes, and they can tailor their academic program with electives, should electives be offered. For example, a student’s success with required courses in math and science can inform his or her choices in economics, accounting, pre-calculus, Algebra II, chemistry, physics, or anatomy and physiology. Junior year has traditionally been a student’s greatest opportunity to improve a GPA before making college applications, so time spent learning is valuable. In contrast, time spent in mandated testing robs each student of classroom instruction time in content areas.

In taking academic time to schedule exams, schools can select their two concurrent exam weeks for performance and non-performance task testing. The twelve-week period (excluding blackout dates) from March through June is the current nationwide target for the SBAC exams, and schools that choose an “early window” (March–April) will lose instructional time before the Advanced Placement exams, which are given in May. Mixed-grade (11th and 12th) Advanced Placement classes will be impacted during scheduled SBACs as well, because teachers can only review past materials instead of progressing with new topics in a content area. Given these circumstances, what district would ever choose an early testing window? Most schools should opt for the “later window” (May) in order to allow 11th grade AP students to take the college credit exam before having to take (another) exam that determines their college and career readiness. Ironically, the barrage of tests that juniors must now complete to determine their “college and career readiness” is leaving them with less and less academic time to become college and career ready.

Perhaps the only fun remaining for 11th graders is the tradition of the junior prom. Except proms are usually held between late April and early June, when (you guessed it) there could be testing.

Is this the Age of Enlightenment? No.
Is this the Age of Reason? No.
Is this the Age of Discovery? No.

This is the Age of Measurement.

Specifically, this is the age of measurement in education where an unprecedented amount of a teacher’s time is being given over to the collection and review of data. Student achievement is being measured with multiple tools in the pursuit of improving student outcomes.

I am becoming particularly attuned to the many ways student achievement is measured as our high school is scheduled for an accreditation visit by the New England Association of Schools and Colleges (NEASC) in the spring of 2014. I am serving as a co-chair with the very capable library media specialist, and we are preparing the school-wide rubrics.

Several of the school-wide rubrics currently in use were designed to complement the scoring systems associated with our state tests, the Connecticut Mastery Test (CMT) and the Connecticut Academic Performance Test (CAPT). While we have modified the criteria and revised the language in the descriptors to meet our needs, we have kept the same number of qualitative criteria in our rubrics. For example, our reading comprehension rubric has the same two scoring criteria as the CAPT. Where our rubric asks students to “explain,” the CAPT asks students to “interpret.” The three rating levels of our rubric are “limited,” “acceptable,” and “excellent,” while the CAPT Reading for Information ratings are “below basic,” “proficient,” and “goal.”

We have other standardized rubrics as well: rubrics that mimic the six-point PSAT/SAT scoring for our junior essays, and rubrics that address the nine-point Advanced Placement scoring scale.

Our creation of rubrics to match the scoring scales of standardized tests is no accident. Our customized rubrics help our teachers determine a student’s performance growth on common assessments that serve as indicators for standardized tests. Many of our current rubrics correspond to standardized test scoring scales of 3, 6, or 9 points; however, these rating levels will soon change.

Our reading and writing rubrics will need to be recalibrated in order to present NEASC with school-wide rubrics that measure 21st Century Learning skills; other rubrics will need to be designed to address our topics. Our school’s NEASC committee has determined that four-point scoring rubrics would be more appropriate for the six topics:

  • Collaboration
  • Information literacy*
  • Communication*
  • Creativity and innovation
  • Problem solving*
  • Responsible citizenship

These six scoring criteria for NEASC highlight a measurement gap created by relying on standardized tests, which directly address only three (*) of these 21st Century skills. Measuring the other skills requires schools like ours to develop their own data stream.

Measuring student performance should require multiple metrics. Measuring student performance in Connecticut, however, is complicated by the lack of common scoring rubrics between the state standardized tests and the accrediting agency, NEASC. The scoring of the state tests themselves can also be confusing, as three- or six-point score results are organized into bands labeled 1–5. Scoring inequities could be exacerbated when the CMT, CAPT, and similar standardized tests are used in 2013 and 2014 as 40% of a teacher’s evaluation, with an additional 5% based on whole-school performance. The measurement of student performance in 21st Century skills will be addressed in teacher evaluation through the Common Core State Standards (CCSS), but these tests are still being designed. By 2015, new tests that measure student achievement according to the CCSS, with their criteria, levels, and descriptors in new rubrics, will be implemented. This emphasis on standardized tests measuring student performance with multiple rubrics has become the significant measure of student and teacher performance, a result of the newly adopted Connecticut teacher evaluation (SEED) program.

The consequence is that today’s classroom teachers spend a great deal of time reviewing data that has limited correlation across the standards of measurement found in statewide tests (CMT, CAPT, CCSS), those in nationwide tests (AP, PSAT, SAT, ACT), and those expected by the accrediting agency (NEASC). Ultimately, valuable teacher time is being expended in determining student progress across a multitude of rubrics with little correlation; yes, in simplest terms, teachers are spending a great deal of time comparing apples to oranges.

I do not believe that a single-metric measurement such as Connecticut’s CMT or CAPT, or any standardized test, accurately reflects a year of student learning; I believe that these tests are snapshots of student performance on a given day. The goals of NEASC in accrediting schools, measuring student performance with school-wide rubrics that demonstrate students performing 21st Century skills, are more laudable. However, as the single test metric has been adopted as a critical part of Connecticut’s newly adopted teacher evaluation system, teachers here must serve two masters, testing and accreditation, each with its own separate system of measurement.

With the aggregation of all these differing data streams, one data stream is missing. No data is being collected on the cost in teacher hours of the collection, review, and recalibration of data. That specific stream of data would show that in this Age of Measurement, teachers have less time to work with students, the kind of time that could allow teachers to engage students in the qualities of ages past: reason, discovery, and enlightenment.