Archives For testing

Notice how I am trying to beat the character limit on headlines?

Here’s the translation:

For your information, Juniors: Connecticut’s Common Core State Standards Smarter Balanced Assessment [Consortium] is Dead on Arrival; Insert Scholastic Assessment Test

Yes, in the State of Connecticut, the test created through the Smarter Balanced Assessment Consortium (SBAC) and based on the Common Core State Standards will be canceled for juniors (11th graders) this coming school year (2015-16) and replaced by the Scholastic Assessment Test (SAT).

The first reaction from members of the junior class should be an enormous sigh of relief: there will be one less set of tests to take during the school year. The second sigh will come from other students, faculty members, and the administrative team for two major reasons: the computer labs will now be available year-round, and schedules will not have to be rearranged for testing sessions.

SAT vs. SBAC Brand

In addition, the SAT’s credibility will most likely earn more buy-in from all stakeholders. Students know what the SAT brand is and what the scores mean; students are already invested in doing well for college applications. Even the shift from the old top score of 1600 (pre-2005) to 2400 with the addition of an essay has been met with general understanding that a top score is 800 in each section (math, English, and essay). A student’s SAT scores are part of a college application, and a student may take the SAT repeatedly in order to submit the highest score.

In contrast, the SBAC brand never reported individual student results. The SBAC was created as an assessment for collecting data for teacher and/or curriculum evaluation. When the predictions of the percentage of anticipated failures in math and English were released, there was frustration among teachers and additional disinterest from students. There was no ability to retake, and if the predictions meant no one could pass, why should students even try?

Digital Testing

Moreover, while the SBAC drove the adoption of digital testing in the state in grades 3-8, most of the pre-test skill development was still done in paper-and-pencil format. Unless the school district consistently offered a seamless integration of 1:1 technology, there could be a question as to what was being assessed: a student’s technical skills or application of background knowledge. Simply put, skills developed with paper and pencil may not translate the same way on digital testing platforms.

As a side note, those who use computer labs or develop student schedules will be happy to know that the SAT is not a digital test… at least not yet.

U.S. Education Department Approved Request

According to an early report (2012) by the Brookings Institution, the SBAC’s full suite of summative and interim assessments and the Digital Library on formative assessment was first estimated to cost $27.30 per student (grades 3-11). The assessment was designed to be economical if many states shared the same test.

Since that initial report, several states have left the Smarter Balanced Consortium entirely.

In May, the CT legislature voted to halt the SBAC in grade 11 in favor of the SAT. This switch will increase the cost of testing. According to an article (5/28/15) in the CT Mirror, “Debate: Swap the SAT for the Smarter Balanced Tests”:

“‘Testing students this year and last cost Connecticut $17 million,’ the education department reports. ‘And switching tests will add cost,’ Commissioner of Education Dianna Wentzell said.”

This switch was approved by the U.S. Department of Education for Connecticut schools on Thursday, 8/6/15; the CT Department of Education had asked that the state not be penalized under the No Child Left Behind Act’s rigid requirements. Currently, the switch to the SAT does not change the tests in grades 3-8; SBAC will continue at those grade levels.

Why SBAC at All?

All this raises the question: why was 11th grade selected for the SBAC in the first place? Was the initial cost a factor?

Since the 1990s, the State of Connecticut had given the Connecticut Academic Performance Test (CAPT) in grade 10, and even though the results were reported late, there were still two years to remediate students who needed to develop skills. In contrast, the SBAC was given in the last quarter of grade 11, leaving less time to address the needs of struggling students. I mentioned these concerns in an earlier post: The Once Great Junior Year, Ruined by Testing.

Moving the SBAC to junior year increased the amount of testing for those electing to take the SAT, with some students also taking the ASVAB (Armed Services Vocational Aptitude Battery) or being selected to take the NAEP (National Assessment of Educational Progress).

There have been three years of “trial testing” for the SBAC in CT, with limited feedback to teachers and students. In contrast, the results from the SAT have always been available to track student progress and are reported to school guidance departments.

Before No Child Left Behind, before the Common Core State Standards, before the SBAC, the SAT was there. What took them (the legislature, the Department of Education, etc.) so long?

Every Junior Will Take the NEW SAT

Denver Post: Heller

In the past, not every student elected to take the SAT, but many districts did offer the PSAT as an incentive. This coming year, the SAT will be given to every 11th grader in Connecticut.

The big wrinkle in this plan?
The SAT has been revised (again) and will be new in March 2016.

What should we expect with this test?

My next headline?

OMG. HWGA.

Graphic by Christopher King that accompanied the editorial piece “In Defense of Annual Testing”

My Saturday morning coffee was disrupted by the headline in the New York Times opinion piece, In Defense of Annual School Testing (2/7/15), by Chad Aldeman, an associate partner at Bellwether Education Partners, a nonprofit education research and consulting firm. Agitating me more than the caffeine was clicking on Aldeman’s resume. Here was another policy analyst in education, without any classroom experience, who served as an adviser to the Department of Education from 2011 to 2012. Here was another policy wonk with connections to the testing industry.

In a piece measuring less than 800 words, Aldeman contended that the “idea of less testing” in our nation’s schools, currently considered by liberal and conservative groups alike, “would actually roll back progress for America’s students.”

…annual testing has tremendous value. It lets schools follow students’ progress closely, and it allows for measurement of how much students learn and grow over time, not just where they are in a single moment.

Here is the voice of someone who has not seen students take a standardized test when, yes, they are very much in “that single moment.” That “single moment” looks different for each student. An annual test does not consider the social and emotional baggage of that “single moment” (e.g., no dinner the night before; social media or video games until 1 AM; parent separation or divorce; a fight with a friend, a mother, a teacher; or general test anxiety). Educators recognize that students are not always operating at optimum levels on test days. No student likes being tested at any “single moment.”

Aldeman’s editorial advocates for annual testing because, he claims, it prevents the kinds of tests that report only a school’s grade-level averages. Taking a group average from a test, he notes, allows “the high performers frequently [to] mask what’s happening to low achievers.” He prefers the kinds of new tests that focus on groups of students with a level of analysis possible only with year-to-year measurement. That year-to-year measurement on these expensive new tests is, no doubt, preferred by testing companies as a steady source of income.

His opinion piece comes at a time when the anti-test movement is growing and states are looking at the expenses of such tests. There is bipartisan agreement in the anti-test movement that students are already being assessed enough. There are suggestions that annual testing could be limited to specific grade levels, such as grades 3, 8, and 11, and that there are already enough assessments built into each student’s school day.

Educators engage in ongoing formative assessments (discussions, polls, homework, graphic organizers, exit slips, etc.) used to inform instruction. Interim and summative assessments (quizzes/tests) are used continuously to measure student performance. These multiple kinds of assessments provide teachers the feedback to measure student understanding and to differentiate instruction for all levels of students.

For example, when a teacher uses a reading running record assessment, the data collected can help determine what instruction will improve a child’s reading competency. When a teacher analyzes a math problem with a child, the teacher can assess which computational skills need to be developed or reviewed.

Furthermore, there are important measures that no standardized test can perform. Engaging students in conversations may provide insight into the social or emotional issues that may be hindering a child’s academic performance.

Of course, the annual tests that Aldeman says should be used to gain information on performance do not take up as much instructor time as the ongoing individual assessments given daily in classrooms. Testing does use manpower efficiently: one hour of testing can yield 30 student-hours of results, and a teacher need not be present to administer a standardized test. Testing can diagnose each student’s strengths and/or weaknesses at that “single moment” in multiple areas at the same time. But testing alone cannot improve instruction, and improving instruction is what improves student performance.

In a perverse twist of logic, the allocation of funds and class time to pay for these annual tests reduces the funds available to finance teachers and the instructional hours needed to improve and deliver the kind of instruction that the tests recommend. Aldeman notes that the Obama administration has invested $360 million in testing, which illustrates its choice to allocate funds to support a testing industry, not schools. The high cost of developing tests and collecting the test data strips funds from state and local education budgets and limits the financial resources for improving academic achievement for students, many of whom Aldeman claims have “fallen through the cracks.”

His argument to continue annual testing does not mention the obscene growth of the testing industry: 57% in the past three years, to $2.5 billion, according to the Software & Information Industry Association. Testing now consumes the resources of every school district in the nation.

Aldeman concludes that annual testing should not be politicized, and that this time is “exactly the wrong time to accept political solutions leaving too many of our most vulnerable children hidden from view.”

I would counter that our most vulnerable children are not hidden from view by their teachers and their school districts. Sadly, their needs cannot be placed “in focus” when the financial resources are reduced or even eliminated in order to fund this national obsession with testing. Aldeman’s defense is indefensible.

Yes, American teachers do work more hours than their international counterparts, but exactly how much more could be a matter of perception versus reality, and testing may be to blame.

A recent study comparing the number of hours worked by American teachers shows the difference in instructional time is not as significant as has been publicized in the past. Researcher Samuel E. Abrams, director of the National Center for the Study of Privatization in Education at Teachers College, Columbia University, has published his findings in a working paper titled “The Mismeasure of Teaching Time.” His research contradicts claims of American teachers working twice as many or even 73% more hours than their counterparts in other countries, correcting these claims by grade level to 12% (elementary), 14% (middle/intermediate), and 11% (high school).

The reason for the difference, Abrams suggests, was the Schools and Staffing Survey (SASS), the source of the data reported to the Paris-based Organization for Economic Cooperation and Development (OECD) on this topic:

The most recent data reported to the OECD is from the 2007-08 survey, which was 44 pages long and contained 75 questions. Teaching time is the 50th question, and it asks teachers to round up the number of hours. As a result, responses were often inflated.

In addition to suggesting that the process of answering so many questions clouded the responses of teachers taking the survey, Abrams contended that the inflated time also came from a misinterpretation of “teaching time,” calculated by the OECD as the “net contact time for instruction.” By definition, net contact time excludes activities such as professional development days, student examination days, attendance at conferences, and out-of-school excursions.

In applying the OECD definition of teaching time, Abrams concluded that one contributing factor to the over-estimation by American teachers was the large number of hours spent assessing students.

Using examples from school districts in Massachusetts, Abrams offered a breakdown of the time teachers spend assessing students in grades 2-8:

  • For students in grade two, 48 hours are lost to interim assessments tied to the state exams;
  • For students in grades three and six, 48 hours are lost to interim assessments and 16 hours are lost to state exams in ELA and math;
  • For students in grades four and seven, 48 hours are lost to interim assessments and 20 hours are lost to state exams in ELA, ELA composition, and math;
  • For students in grades five and eight, 48 hours are lost to interim assessments and 24 hours are lost to state exams in ELA, math, and science.

Averaging a student school week at a very generalized 35 hours means that students in Massachusetts grades 2-8 could spend approximately 1.5-2 weeks of each school year being assessed. Spreading this time out over the school year may contribute to the perception of a never-ending test season.
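For readers who want to check that estimate, here is a minimal sketch of the arithmetic in Python, using Abrams’s Massachusetts hour figures from the list above and the generalized 35-hour student week:

```python
# Hours lost to testing per grade band (Abrams's Massachusetts figures),
# converted into weeks of a generalized 35-hour student school week.
INTERIM_HOURS = 48  # interim assessments, common to every grade band listed
STATE_EXAM_HOURS = {
    "grade 2": 0,          # interim assessments only
    "grades 3 and 6": 16,  # state exams in ELA and math
    "grades 4 and 7": 20,  # ELA, ELA composition, and math
    "grades 5 and 8": 24,  # ELA, math, and science
}
SCHOOL_WEEK_HOURS = 35     # the "very generalized" student week

for band, exam_hours in STATE_EXAM_HOURS.items():
    total_hours = INTERIM_HOURS + exam_hours
    weeks = total_hours / SCHOOL_WEEK_HOURS
    print(f"{band}: {total_hours} hours = {weeks:.1f} weeks")
# Prints roughly 1.4 to 2.1 weeks per year, in line with the
# "approximately 1.5-2 weeks" estimate above.
```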

The report considered that the time American educators spend assessing students at every grade level contributed to the misperception of teaching time. More importantly, the study highlighted the disparity in pedagogical practice between the education system in the United States and those of other countries. Like so many other researchers, Abrams contrasted American schools with Finland’s school system. He noted that the difference in teaching time between the two countries was not as great as originally publicized, but that the difference in practice is the “polar opposite.” In Finland, the school day is structured with 15-minute breaks between classes, or 15 minutes of play for every 45 minutes of instruction, for a total of 75 minutes per day, with no standardized tests. The result is that Finland’s teachers demonstrate little confusion in defining teaching time.

The data provided by Abrams suggest that American teachers do work more than other teachers worldwide. Using the Paris-based OECD figures to convert the percentage differences into regular 40-hour weeks means that American elementary teachers work 2.4 weeks (12%), middle/intermediate teachers 2.75 weeks (14%), and high school teachers 2.2 weeks (11%) more than other teachers worldwide.
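As a rough back-of-the-envelope check of that conversion: the annual teaching-time baseline implied by these figures is about 800 hours. That baseline is my inference from the stated results, not a number quoted from the OECD tables:

```python
# Convert Abrams's corrected percentage differences into extra 40-hour
# weeks, assuming an annual teaching-time baseline of ~800 hours for
# counterpart teachers (an inference from the stated results, not an
# official OECD figure).
BASELINE_ANNUAL_HOURS = 800  # assumed counterpart teaching hours per year
WEEK_HOURS = 40              # the "regular 40 hour weeks" from the text

corrections = {"elementary": 0.12, "middle/intermediate": 0.14, "high school": 0.11}
for level, pct in corrections.items():
    extra_weeks = BASELINE_ANNUAL_HOURS * pct / WEEK_HOURS
    print(f"{level}: about {extra_weeks:.1f} extra weeks per year")
# elementary ~2.4, middle/intermediate ~2.8, high school ~2.2 weeks,
# close to the 2.4 / 2.75 / 2.2 figures cited above.
```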

If the demand for assessment is the reason for the difference, I am confident that most American teachers could think of things to do during those weeks other than testing.

I am sure their students feel the same way.

It’s official.

The chocolate milk debate as a test writing prompt is dead in Connecticut for all grade levels.

Yes, that old stalwart, “Should there be chocolate milk in schools?”, offered to students as a standardized writing prompt, was made null and void with one stroke of Governor Malloy’s pen. According to the Hartford Courant (6/12/14), “Malloy Veto Keeps Chocolate Milk On School Lunch Menus”:

“to the vast relief of school kids, nutritionists, milk producers and lawmakers, Gov. Dannel P. Malloy used his veto power Thursday to kill a bill that would have banned chocolate milk sales in Connecticut schools.” 

Apparently, the same nutritional charts, editorials, and endorsements from dairy groups, organized in packets and given to students in grades 3-11 to teach how to incorporate evidence into a fake persuasive argument under testing conditions, were convincing enough to move real CT residents to make a persuasive argument to legislators. To show his solidarity with the people, Governor Malloy quaffed a container of chocolate milk before vetoing the bill that would have banned the sale of chocolate milk in schools.

Typically, the writing prompt is addressed in English/Language Arts (ELA) class in elementary schools, but in middle and high schools, a persuasive essay is often the responsibility of the social studies teacher. The assumption here is that the skill of persuasion requires research and the incorporation of evidence, both taught in social studies classes. In contrast, ELA classes are dedicated to the analysis of literature through essays using a range of skills: identifying author’s craft, identifying author’s purpose, editing, and revising. The responsibilities for the writing portion of an exam are divided between the ELA classes for the literary analysis essay and the social studies classes for the persuasive essay. This design is intended to promote an interdisciplinary effort, but it is an intellectually dishonest division of labor.

ELA teachers can prepare students for standardized tests using ELA content (literature and grammar) to improve skills. Math and science teachers are likewise tied to their disciplines’ content in preparing their students. Social studies is the only core discipline with a test-prompt disconnect.

So, what topics might test creators design to replace the infamous chocolate milk debate prompt? Before test creators start manufacturing new and silly debates, there is a window of opportunity where attention could be brought to this disconnect between content and testing in writing. Here is the moment where social studies teachers should point out to test creators the topics from their curriculum that could be developed into writing prompts. Here is a foot in the door for the National Council for the Social Studies to introduce writing prompts that complement their content. For example, there could be prompts about Egyptian culture, prompts on the American Revolution, or prompts about trade routes and river based communities. Too often, social studies teachers must devote class time to topics unrelated to curriculum.

The Smarter Balanced Assessment Field Test given this past spring (2014) to 11th graders was about the use of social media by journalists. As students took the test, I overheard the following exchange:

“Of course they use social media,” grumbled one student, “who is going to stop them?”
“Do they think they are ‘cool’ because they mentioned Twitter?” countered another.

Previous standardized test writing prompts (in Connecticut, the CMT and CAPT) for high school and middle school have asked students to write persuasively on the age at which students should be able to drive; whether wolves should be allowed in Yellowstone National Park; whether to permit the random drug testing of high school students; and whether uniforms should be required in schools.

Please notice that none of these aforementioned prompts is directly related to the content of any social studies curriculum. Furthermore, the sources prepared for students to use as evidence in responding to these prompts are packets of newspaper opinion columns, polls, and statistical charts; no serious research is required.

Here is the moment when social studies teachers and curriculum leaders need to point out how academically dishonest the standardized test writing prompt is as a measure of instruction in their discipline. No longer should the content of social studies be abandoned for inauthentic debate.

The glass in Connecticut is half-full now that students can have chocolate milk in schools. Time for test creators to empty out the silly writing prompts that have maddened social studies teachers for years.

Time to choose content over chocolate.

 

Not so long ago, 11th grade was a great year of high school. The pre-adolescent fog had lifted, and the label of “sophomore,” literally “wise fool,” gave way to the less insulting “junior.” Academic challenges and social opportunities for 16- and 17-year-olds increased as students sought driver’s permits/licenses, employment, or internships in an area of interest. Students in this stage of late adolescence could express interest in their future plans, be it school or work.

Yet the downside to junior year had always been college entrance exams, and so junior year had typically been spent in preparation for the SAT or ACT. When to take these exams had always been up to the student, who paid a base price of $51 (SAT) or $36.50 (ACT) for the privilege of spending hours testing in a supervised room and weeks in anguish waiting for the results. Because colleges accept the best score, some students choose to take the test many times, as scores generally improve with repetition.

Beginning in 2015, however, juniors must prepare for another exam in order to measure their learning against the Common Core State Standards (CCSS). The two federally funded testing consortia, the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have selected 11th grade to determine how college and career ready a student is in English/Language Arts and Math.

The result of this choice is that 11th grade students will be taking the traditional college entrance exam (SAT or ACT) on their own as an indicator of their college preparedness. In addition, they will take another state-mandated exam, either the SBAC or the PARCC, that also measures their college and career readiness. While the SAT or ACT is voluntary, the SBAC or PARCC will be administered during the school day, using 8.5 hours of instructional time.

Adding to this series of tests lined up for junior year are the Advanced Placement exams. Many 11th grade students opt to take Advanced Placement courses in a variety of disciplines, either to gain college credit for a course or to indicate to college admissions officers an academic interest in college-level material. These exams are also administered during the school day in the first weeks of May, each taking 4 hours to complete.

One more possible test to add to this list is the Armed Services Vocational Aptitude Battery (ASVAB), which, according to the website Today’s Military, is given at more than half of all high schools nationwide to students in grades 10, 11, or 12, although 10th graders cannot use their scores for enlistment eligibility.

The end result is that junior year has gradually become the year of testing, especially from March through June, and all this testing is cutting into valuable instructional time. When students enter 11th grade, they have completed many prerequisites for more advanced academic classes, and they can tailor their academic program with electives, should electives be offered. For example, a student’s success with required courses in math and science can inform his or her choices in economics, accounting, pre-calculus, Algebra II, chemistry, physics, or Anatomy and Physiology. Junior year has traditionally been a student’s greatest opportunity to improve a GPA before making college applications, so time spent learning is valuable. In contrast, time spent in mandated testing robs each student of classroom instruction time in content areas.

In taking academic time to schedule exams, schools can select their two consecutive exam weeks for performance and non-performance task testing. The twelve-week period (excluding blackout dates) from March through June is the current nationwide target for the SBAC exams, and schools that choose an “early window” (March-April) will lose instructional time before the Advanced Placement exams given in May. Mixed (grade 11 and 12) Advanced Placement classes will be impacted during scheduled SBACs as well, because teachers can only review past materials instead of progressing with new topics in a content area. Given these circumstances, what district would ever choose an early testing window? Most schools should opt for the “later window” (May) in order to allow 11th grade AP students to take the college credit exam before having to take (another) exam that determines their college and career readiness. Ironically, the barrage of tests that juniors must now complete to determine their “college and career readiness” is leaving them with less and less academic time to become college and career ready.

Perhaps the only fun remaining for 11th graders is the tradition of the junior prom. Except proms are usually held between late April and early June, when (you guessed it) there could be testing.

March Madness is not exclusive to basketball.
March Madness signals the standardized testing season here in Connecticut.
March Madness signals the tip-off for testing in 23 other states as well.

All CT school districts were offered the opportunity to choose either the soon-to-be-phased-out pen and paper tests (the grades 3-8 Connecticut Mastery Test (CMT) and the grade 10 Connecticut Academic Performance Test (CAPT)) or the new set of computer adaptive Smarter Balanced tests developed by the federally funded Smarter Balanced Assessment Consortium (SBAC). Regardless of choice, testing would begin in March 2014.

As an incentive, the SBAC offered the 2014 field test as “practice only,” a means to develop and calibrate future tests to be given in 2015, when the results will be recorded and shared with students and educators. Districts weighed their choices based on technology requirements, and many chose the SBAC field test. But for high school juniors who had completed the pen and paper CAPT in 2013, this is practice; they will receive no feedback. This 2014 SBAC field test will not count.

Unfortunately, the same cannot be said for the 8.5 hours of testing in English/language arts and mathematics that had to be taken from 2014 academic classes. The elimination of 510 minutes of instructional time is complicated by scheduling students into computer labs with hardware that meets testing specifications. For example, rotating students alphabetically through these labs means that academic classes scheduled during the testing windows may see students A-L one day and students M-Z on another. Additional complications arise for mixed-grade classrooms or schools with block schedules. Teachers must be prepared with partial or repeated lessons during the two-week testing period; some teachers may miss seeing students for extended periods of time. Scheduling madness.
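To make that scheduling arithmetic concrete, here is a minimal sketch; the 8.5-hour figure is from the text, while the roster and the A-L/M-Z split are hypothetical illustrations of the rotation described above:

```python
# 8.5 hours of SBAC testing expressed in instructional minutes,
# plus the alphabetical lab rotation described above (hypothetical roster).
TESTING_HOURS = 8.5
print(f"Instructional time lost: {int(TESTING_HOURS * 60)} minutes")  # 510

students = ["Alvarez", "Baker", "Chen", "Lopez", "Martin", "Nguyen", "Smith", "Zhao"]
# Rotate students through the computer lab alphabetically: A-L one day, M-Z another.
day_one = [s for s in students if s[0].upper() <= "L"]
day_two = [s for s in students if s[0].upper() >= "M"]
print("Lab day 1 (A-L):", day_one)  # these students miss their academic classes
print("Lab day 2 (M-Z):", day_two)  # the rest miss class on a different day
```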

For years, the state standardized test was given to grade 10, sophomore students. In Connecticut, the results were never timely enough to deliver instruction to address areas of weakness during 10th grade, but they did help inform general areas of weakness in curriculum in mathematics, English/language arts, and science. Students who had not passed the CAPT had two more years to pass this graduation requirement; two more years of education were available to address specific student weaknesses.

In contrast, the SBAC is designed to be given to 11th graders, the junior class. Never mind that these junior year students are preparing to sit for the SAT or ACT, national standardized tests. Never mind that many of these same juniors have opted to take Advanced Placement courses with testing dates scheduled for the first two full weeks of May. On Twitter feeds, AP teachers from New England to the mid-Atlantic are already complaining about the number of school days lost to winter weather delays (for us, 5) and the scheduled week of spring break (for us, the third week of April) that comes right before testing for these AP college credit exams. There is content to be covered, and teachers are voicing concerns about losing classroom seat time. Madness.

Preparing students to be college and career ready by eliminating the instructional time teachers use to prepare students for college-required standardized testing (SAT, ACT) is puzzling, but taking instructional time so students can take state-mandated standardized tests that claim to measure preparedness for college and career is an exercise in circular logic. Juniors are experiencing an educational Catch-22: they are practicing for a test they will never take, a field test that does not count. More madness.

In addition, juniors who failed the CT CAPT in grade 10 will still practice with the field test in 2014. Their CAPT graduation requirement, however, cannot be met with this test, and they must still take an alternative assessment to meet district standards. Furthermore, from 2015 on, students who do not pass SBAC will not have two years to meet a state graduation requirement; their window to meet the graduation standard is limited to their senior year. Even more madness.

Now, on the eve of the inaugural testing season, a tweet from SBAC itself (3/14):

[Screenshot of the SBAC tweet announcing a delay to the field test]

This tweet was followed by word from CT Department of Education Commissioner Stefan Pryor’s office, sent to superintendents by Dianna Roberge-Wentzell, that the state test will be delayed a week:

Schools that anticipated administering the Field Test during the first week of testing window 1 (March 18 – March 24) will need to adjust their schedule. It is possible that these schools might be able to reschedule the testing days to fall within the remainder of the first testing window or extend testing into the first week of window 2 (April 7 – April 11).

Education Week blogger Stephen Sawchuk provides more details on the reason for the delay in his post “Smarter Balanced Group Delays”:

The delay isn’t about the test’s content, officials said: It’s about ensuring that all the important elements, including the software and accessibility features (such as read-aloud assistance for certain students with disabilities) are working together seamlessly.

“There’s a huge amount of quality checking you want to do to make sure that things go well, and that when students sit down, the test is ready for them, and if they have any special supports, that they’re loaded in and ready to go,” Jacqueline King, a spokeswoman for Smarter Balanced, said in a March 14 interview. “We’re well on our way through that, but we decided yesterday that we needed a few more days to make sure we had absolutely done all that we could before students start to take the field tests.”

A few more days is probably what teachers who carefully planned alternative lessons for the first week of the field test want in order to revise those plans. The notice that districts “might be able to reschedule” in the CT memo is not helpful for a smooth delivery of curriculum, especially since school schedules are not developed with empty time slots available to accommodate “willy-nilly” testing windows. There are field trips, author visits, and assemblies scattered throughout the year, sometimes organized years in advance. Cancellation of activities can be at best disappointing, at worst costly. Increasing madness.

Added to all this madness is a growing “opt-out” movement for the field test. District administrators are trying to address this concern from parents on one front and, on another, the growing concerns of educators wrestling with an increasingly fluid schedule. According to Sarah Darer Littman on the blog Connecticut News Junkie, the Bethel school district offered the following in a letter that parents of Bethel High School students received in February:

“Unless we are able to field test students, we will not know what assessment items and performance tasks work well and what must be changed in the future development of the test . . . Therefore, every child’s participation is critical.

For actively participating in both portions of the field test (mathematics/English language arts), students will receive 10 hours of community service and they will be eligible for exemption from their final exam in English and/or Math if they receive a B average (83) or higher in that class during Semester Two.”

Field testing as community service? Madness. Littman goes on to point out that research shows a student’s GPA is a better indicator of college success than an SAT score, and suggests that such an exemption raises questions about a district valuing standardized testing over student GPA, its own internal measurement. That statement may cause even more madness, of an entirely different sort.

Connecticut is not the only state to be impacted by the delay. SBAC states include: California, Delaware, Hawaii, Idaho, Iowa, Maine, Michigan, Missouri, Montana, Nevada, New Hampshire, North Carolina, North Dakota, Oregon, Pennsylvania, South Carolina, South Dakota, U.S. Virgin Islands, Vermont, Washington, West Virginia, Wisconsin, Wyoming.

In the past, Connecticut has been called “The Land of Steady Habits,” “The Constitution State,” and “The Nutmeg State.” With the SBAC, we could claim that we are now “A State of Madness,” except the 23 other states might want the same moniker. Maybe we should compete for the title? A kind of education bracketology, just in time for March Madness.

The New York State Department of Education’s new standardized tests were administered last week. The tests for grades 3-8 were developed by the educational testing company Pearson and contained new “authentic” passages aligned to the new Common Core State Standards. State tests might have been routine news had not several teachers also noticed that the English Language Arts “authentic” passages mentioned products and trademarked names, including Mug© Root Beer and Lego©.

Product placement on standardized tests in elementary schools is bigger news. The public has grown accustomed to advertisements on webpages, before videos, on scoreboards, and with the well-placed beverage during a movie. Subtle and direct advertising to the youth market to develop brand loyalty at an early age is the goal of almost every corporation.

Consider the “Taking Stock With Teens” survey (taken March 1-April 3, 2013) by Piper Jaffray, a leading investment bank and asset management firm, which gathered input from approximately 5,200 teens (average age of 16.3 years). The survey is used to determine trends, and the most recent results note:

“Spending has moderated across discretionary categories for both upper-income and average-income teens when compared to the prior year and prior season. Yet nearly two-thirds of respondents view the economy as consistent to improving, and just over half signaled an intent to spend ‘more’ on key categories of interest, particularly fashion and status brand merchandise.”

Much attention, therefore, is placed on the youth market, and product placement on standardized testing could be a new marketing strategy. For example, corporations in the fashion industry could read this report and be inclined to offer news stories, or commission short stories, that mention clothing brand names to Pearson or another testing company in order to provide “authentic” passages. What better opportunity for corporations to build brand loyalty than with an audience held captive in a classroom during a state-mandated test?

The education reporter for the Washington Post, Valerie Strauss, reported on the “authentic” passages that mentioned products as “author’s choices.” Here is Pearson’s response to her query:

As part of our partnership with NYSED, Pearson searches for previously published passages that will support grade-level appropriate items for use in the 3-8 ELA assessments. The passages must meet certain criteria agreed upon by both NYSED and Pearson in order to best align to Common Core State Standards and be robust enough to support the development of items. Once passages are approved, Pearson follows legal protocols to procure the rights to use the published passages on the assessment on behalf of NYSED. If a fee is required to obtain permission, Pearson pays this fee. NYSED has ultimate approval of passages used on the assessment.

Strauss’s report, “New Standardized Tests Feature Plugs for Commercial Products,” also indicated that this practice is not exclusive to NY and that “several different assessment programs have instances of brand names included due to use of authentic texts.” No specifics were mentioned.

Following up with the NY Department of Education in “Stories from the Front Line of Testing,” Beth Fertig of the WNYC blog SchoolBook asked about the recent product placement:

“This is the first time we have had 100 percent authentic texts on the assessments,” said spokesman Tom Dunn. “They were selected as appropriate to measure the ELA standards. Any brand names that occurred in them were incidental and were cited according to publishing conventions. No one was paid for product placements.”

Perhaps no one was paid this year, but an unwritten taboo was broken with these standardized tests. The New York Post reported one teacher’s response in the article “Learn ABC’s – & IBM’s: Products in Kid Exams” by Yoav Gonen and Georgett Roberts:

“I’ve been giving this test for eight years and have never seen the test drop trademarked names in passages — let alone note the trademark at the bottom of the page,” said one teacher who administered the exam.

They also reported that other commercial properties, including the TV show “Teen Titans” and the international soccer brand FIFA, appeared on the tests.

While gaining the loyalty of the youth market is a necessary step for major corporations, the appearance of these brands on standardized tests brings our students one step closer to the future envisioned by Steven Spielberg in the film Minority Report. In one scene, the fugitive John Anderton (Tom Cruise) walks along a corridor while animated billboards market directly to him by calling his name.

The possibility of this kind of marketing exists, and perhaps personalized advertising will call to us every day: a cacophony of advertisements designed to keep brand names in our consciousness. Similarly, even the youngest students are the target of marketing campaigns as part of any corporation’s long-term economic strategy; advertisements on multiple platforms are the “white noise” of their lives. So frequent are advertisements in students’ lives that any product placement, paid or unpaid, on these standardized tests may contribute to the definition of what is “authentic.” Students are exposed to ads so frequently and in so many genres that a text is not real without some brand name mentioned.

And if that product placement is a small part of what makes a passage “authentic” on a standardized test, can talking “authentic” billboards in the school hallways be far behind?

Three years ago, I was part of a team of teachers and several administrators, including our current superintendent of schools, who attended the Florida Educational Technology Conference (FETC) as professional development to meet the coming demands for the 21st Century skills of communication, collaboration, critical thinking, and creativity. Our rural Regional School District #6 is small (under 1,000 students total), tucked away in the pastoral splendor of the Northwest Corner of Connecticut. The regional high school (Wamogo Middle/High School) is a vocational agricultural school that brings in one-third of its population from surrounding communities. We have a cow, pigs, lambs, and fish on the high school campus at any given time of the year. Despite our rustic roots, we had a committed technology team that was willing to support early adopters of technology in the classroom.

When we attended FETC in 2010, we were overwhelmed by the amount of educational technology competing for our attention; the exhibit floor was awash in hardware and software. We came home laden with flyers, booklets, and pamphlets. We took notes. We followed up on links and websites. The experience was mind-boggling and exhausting.

This January (2013), several of us returned to FETC. The exhibit floor was still awash with hardware and software, but we were far more savvy. That is because in three short years our district invested in the necessary hardware and training for 21st Century educational skills. There are Smartboards in every classroom, a netbook 1:1 initiative in the elementary and middle schools, and iPads for faculty and staff. The high school is in its first year of a “bring your own digital device” policy. For two years now, we have had EdCamp-style professional development for our faculty and staff (K-12) to share what we have learned individually and collectively.

Consequently, during this FETC conference we were already familiar with the technologies featured in many of the sessions, and we could add to our knowledge base without feeling completely overwhelmed. In three years we learned the basics of wikis, blogs, podcasts, vodcasts, screencasts, and websites. So, when we attended this FETC, we were prepared for the presentations and concurrent sessions that featured platforms we use daily, such as Livebinders, Edmodo, WordPress, and Google apps. We were reassured that the open source software platforms we chose three years ago are still major players in education. We learned new ways to use technologies to help us assess, organize, and deliver content.

We attended keynotes that discussed the future of education:

  • Google Global Education Evangelist Jaime Casap spoke on “Unleashing the Power of the Web in Education”. His presentation focused on the power of collaboration and the rapidly changing way our students access and use information. “Your Smartphone?” he predicted with a laugh, “one day will be in a thrift store, purchased by some hipster as a nostalgic decorative touch.” The standardized test did not have a place in his vision of education.
  • Educational Consultant & Author Dr. David Sousa (How the Brain Learns, How the Brain Learns to Read, How the Brain Influences Behavior, and Brainwork: The Neuroscience of How We Lead Others) gave an address titled “Designing Brain Friendly Schools in the Age of Accountability”. His talk emphasized the importance of physical movement in learning and the need for sleep for healthy cognitive processing, while dismissing the notion that anyone can “multi-task” effectively. “Multi-tasking three or four things means doing three or four things poorly,” he admonished those in the tech-connected audience who raised their hands as multi-taskers. He dismissed the standardized test as unnecessary.
  • Katie Salen, Executive Director of the Institute of Play (and Professor in the School of Computing and Digital Media at DePaul University), spoke on “Connected Learning: Activating Games, Design and Play”. This keynote offered video of students engaged in designing and playing games in different content areas. She explained that games allow students to “learn how to fail up,” using immediate feedback and experience to reengage in a game. She dismissed standardized tests as “unimportant and that’s ok.”

While each keynote speaker addressed the role of technology in education differently, none of them saw the standardized test as a means to assess what students were doing. There were no standardized tests in their visions of education. They rejected the idea of standardization entirely, speaking instead of collaboration and individual exploration. In contrast to the speeches, however, the exhibit floor was filled with software and hardware from the giants of the standardized testing industry: McGraw-Hill, Pearson, and Global Scholar. The juxtaposition of what was said in the keynote speeches about standardized testing with the marketing of materials by testing companies on the exhibit floor illustrates a huge conflict in the use of technology in education today: How will our school systems be measured in this age of information? What will be important for our students to know? How will we measure these skills? The economic implications for testing companies cannot be ignored; they want a place at the local, state, and federal table where the education budget is being discussed.

Of course, our small district does not have the solutions to these questions, but what we do have is a sense of confidence in the tools of educational technology. The attendees at this year’s FETC conference are confident that our school district is on the right track in providing an education with an emphasis on 21st Century skills. We will be collaborating with our fellow faculty members, communicating what we learned, and thinking critically about how to use technology in our classrooms in order to enhance our students’ creativity.

While we were there, we met members of a neighboring school district attending FETC for the first time. We recognized the glassy-eyed look of a first visit; they claimed to be “overwhelmed.” They also told us that they were attending because, “we saw what you all had done. We are here because of you!”

In three years, the teachers in Regional School District #6 have achieved competence and confidence in the use of technology because of our administration, our regional Board of Education, and the Superintendent’s commitment to the future of education. As one science teacher tweeted during a session he was attending, “Don’t mean to brag, but I’m lighting this social media seminar up. Props to Region 6 for giving me the freedom to communicate.”

Since many college applications are due between January 1 and February 1, I know that many of my students are fretting about their SAT scores. I wish I could tell them to relax, that the score is just a score, and that they will never have to hear the words SAT again, but that would not be telling them the truth. The hairy hand of the SAT can reach far forward into their future. An SAT score is a brand, locking academic potential into a data point where we are forever 17 years old.

When I took the test, it was known as the Scholastic Aptitude Test, and that was before it became known as the Scholastic Assessment Test. At that time, the top score was a 1600, and there was no writing section. There were no prep sessions from pricey tutors available after school or on Saturdays to practice for the SAT. I think I glanced through a practice book.

That Saturday morning, I was dropped off by my father in our ’68 VW van along with hundreds of equally bleary-eyed seniors. I think I paid that day, because I remember waiting for him to write out a check. About two hours later, in the middle of the math section, I remember thinking, “Whoa… maybe I should have studied for this.” I had approached this milestone in my life with a little too much confidence and too little breakfast. I came out of that ordeal exhausted and starved.

Some 38 years later, I am still reminded of the results from that day. For example, on applications to graduate school, there is always a question about my score on the SAT taken back in 1974.

“Really?” I think to myself. “I am so much better a student today. I have two graduate degrees, and I am gainfully employed in the field of education. I am a very differently educated person from my 17-year-old self. Then, I was financially strapped, working part-time in a pizza restaurant, and I had yet to attend my first rock concert. Yet you still want to know what my high school SAT score was?”

While I am not ashamed of my score, I am not posting it, either. Fortunately, because of my SAT score, I have been able to waive out of other standardized tests, for example, the Praxis I in Connecticut, which requires a combined minimum score of 1000. You can be content to know I met this minimum standard with several hundred points to spare. I did very well on the verbal section, but in retrospect, I probably could have done better had I prepared for the math section a little more.

So when I come to that question on an application, I think how that score, earned one cold spring morning when I was 17, cannot accurately reflect who I am today. Nor do I think that an SAT score accurately reflects who my students are. At this time of year, I hear them discuss numbers as they explain why they may or may not, or did or did not, get into a college of their choice. Sometimes I am surprised to hear particularly high or low scores; however, this information never changes my opinion of the student seated in my class. A student with a particularly high SAT score may never turn a paper in on time and have a failing grade, while a student with a low SAT score may have an “A” in my class because every assignment is done on time or revised when recommended. The SAT may be an “indicator,” but these are students, not numbers. The score on an SAT can still fall subject to human error.

I do not think at age 17 that I fully understood how far forward into my future the hairy hand of the SAT would travel. I doubt my students understand, but I hope they know that their futures will not depend on their 17-year-old academic selves.

I suppose I should be grateful that when I am asked for my SAT score, there is not also a request for additional identification, say, a picture of me from that decade. That thought is chilling. The hip-huggers, bell bottoms, velvet jackets, and ubiquitous leotards of my high school decade are positively comical. My yellow chiffon prom dress is particularly hilarious. On the whole, I’d rather they see my SAT score.