Archives For Partnership for Assessment of Readiness for College and Careers (PARCC)

At the intersection of data and evaluation, here is a hypothetical scenario:

A young teacher meets an evaluator for a mid-year meeting.

“85% of the students are meeting the goal of 50% or better; in fact, they just scored an average of 62.5%,” the young teacher says.

“That is impressive,” the evaluator responds, noting that the teacher had obviously met his goal. “Perhaps you could also explain how the data illustrates individual student performance and not just the class average?”

“Well,” says the teacher, offering a printout, “according to the (Blank) test, this student went up 741 points, and this student went up….” He continues to read from the spreadsheet: “81 points…and this student went up, um, 431 points, and…”

“So,” replies the evaluator, “these points mean what? Grade levels? Stanine? Standard score?”

“I’m not sure,” says the young teacher, looking a bit embarrassed. “I mean, I know my students have improved; they are moving up, and they are now at a 62.5% average, but…” He pauses.

“You don’t know what these points mean,” answers the evaluator. “Why not?”

This teacher, who tracked an upward trajectory of points, was able to illustrate a trend: his students are improving. But the points his students receive are meaningless without data analysis. What doesn’t he know?

“We just were told to do the test. No one has explained anything…yet,” he admits.

There will need to be time for a great deal of explaining as the new standardized tests that measure the Common Core State Standards (CCSS), the Smarter Balanced Assessments (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), are implemented over the next few years. These digital tests are part of an educational reform mandate that will require teachers at every grade level to become adept at interpreting data for use in instruction, and that interpretation will require dedicated professional development.

Understanding how to interpret data from these new standardized tests, and from others, must be part of every teacher’s professional development plan. Understanding a test’s metrics is critical because results are easily misinterpreted. For example, the data in the scenario above suggest that one student (+741 points) is making enormous leaps forward while another (+81) is lagging behind. But consider how different the analysis would be if this particular test measured student performance in levels of 500-point increments. In that case, an improvement of +741 may not seem so impressive, and a gain of +431 may fall short of moving up a level. Or the data might reveal that a gain of 81 points is not minimal at all, because that student had already reached the top of the scale. In the drive to improve student performance, all teachers must have a clear understanding of how results are measured, what skills are tested, and how this information can be used to drive instruction.

Therefore, professional development must include information on the metrics used to measure student performance on each test. But professional development for data analysis cannot stop at the PowerPoint! Data analysis training cannot come “canned,” especially if the professional development is marketed by a testing company. Too often, teachers are given information about testing metrics by those outside the classroom, with little opportunity to see how the data can help their practice in their own classrooms. Professional development must include the conversations and collaborations that allow teachers to share how they use, or could use, data in the classroom. Such conversations and collaborations with other teachers will give teachers opportunities to review these test results to see whether they support or contradict data from other assessments.

Such conversations and collaborations will also allow teachers to revise lessons or units and update curriculum to address weaknesses exposed by data from a variety of assessments. Interpreting data must be an ongoing collective practice for teachers at every grade level; teacher competency with data will come with familiarity.

In addition, data should be collected on a software platform that is accessible and integrated with other school assessment programs. Data collection must be both transparent in reporting results and secure in protecting the privacy of each student. The benefit of technology is that digital testing platforms should be able to calculate results in a timely manner, freeing up time for teachers to implement the changes suggested by data analysis. Most importantly, teachers should be trained to use this software platform.

Student data is critical in evaluating both teacher performance and curriculum effectiveness, and teachers must be trained to interpret the rich pool of data coming from the new standardized tests. Without the professional development steps detailed above, however, evaluation conversations in the future might sound like the response in the opening scenario:

“We just were told to do the test. No one has explained anything…yet.”

Not so long ago, 11th grade was a great year of high school. The pre-adolescent fog had lifted, and the label of “sophomore,” literally “wise fool,” gave way to the less insulting “junior.” Academic challenges and social opportunities for 16- and 17-year-olds increased as students sought driver’s permits or licenses, employment, or internships in an area of interest. Students in this stage of late adolescence could express interest in their future plans, be it school or work.

Yet the downside to junior year had always been college entrance exams, and so junior year had typically been spent in preparation for the SAT or ACT. When to take these exams had always been up to the student, who paid a base price of $51 (SAT) or $36.50 (ACT) for the privilege of spending hours testing in a supervised room and weeks in anguish waiting for the results. Because colleges accept the best score, some students choose to take the test several times, as scores generally improve with repetition.

Beginning in 2015, however, juniors must prepare for another exam, one that measures their learning against the Common Core State Standards (CCSS). The two federally funded testing consortia, the Smarter Balanced Assessments (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have selected 11th grade to determine how college and career ready a student is in English/Language Arts and Math.

The result of this choice is that 11th grade students will take the traditional college entrance exam (SAT or ACT) on their own as an indicator of their college preparedness. In addition, they will take another state-mandated exam, either the SBAC or the PARCC, that also measures their college and career readiness. While the SAT or ACT is voluntary, the SBAC or PARCC will be administered during the school day, using 8.5 hours of instructional time.

Adding to this series of tests lined up for junior year are the Advanced Placement exams. Many 11th grade students opt to take Advanced Placement courses in a variety of disciplines, either to gain college credit for a course or to signal to college admissions officers an interest in college-level material. These exams are also administered during the school day, in the first weeks of May, each taking 4 hours to complete.

One more possible test to add to this list is the Armed Services Vocational Aptitude Battery (ASVAB), which, according to the website Today’s Military, is given in more than half of all high schools nationwide to students in 10th, 11th, or 12th grade, although 10th graders cannot use their scores for enlistment eligibility.

The end result is that junior year has gradually become the year of testing, especially from March through June, and all this testing is cutting into valuable instructional time. When students enter 11th grade, they have completed many prerequisites for more advanced academic classes, and they can tailor their academic program with electives, should electives be offered. For example, a student’s success with required courses in math and science can inform his or her choices in economics, accounting, pre-calculus, Algebra II, chemistry, physics, or anatomy and physiology. Junior year has traditionally been a student’s greatest opportunity to improve a GPA before making college applications, so time spent learning is valuable. In contrast, time spent in mandated testing robs each student of classroom instruction time in content areas.

In taking academic time to schedule exams, schools can select their two exam weeks for performance and non-performance task testing. The twelve-week period (excluding blackout dates) from March through June is the current nationwide target for the SBAC exams, and schools that choose an “early window” (March-April) will lose instructional time before the Advanced Placement exams, which are given in May. Mixed (11th and 12th grade) Advanced Placement classes will be impacted during scheduled SBAC testing as well, because teachers can only review past material instead of progressing with new topics in a content area. Given these circumstances, what district would ever choose an early testing window? Most schools should opt for the “later window” (May) in order to allow 11th grade AP students to take the college credit exam before having to take (another) exam that determines their college and career readiness. Ironically, the barrage of tests that juniors must now complete to determine their “college and career readiness” is leaving them with less and less academic time to become college and career ready.

Perhaps the only fun remaining for 11th graders is the tradition of the junior prom. Except proms are usually held between late April and early June, when (you guessed it) there could be testing.