Archives For standardized tests

Graphic by Christopher King that accompanied the editorial piece “In Defense of Annual Testing”

My Saturday morning coffee was disrupted by the headline of a New York Times opinion piece, In Defense of Annual School Testing (2/7/15), by Chad Aldeman, an associate partner at Bellwether Education Partners, a nonprofit education research and consulting firm. What agitated me more than the caffeine in the coffee was clicking on Aldeman’s résumé. Here was another policy analyst in education, without any classroom experience, who served as an adviser to the Department of Education from 2011 to 2012. Here was another policy wonk with connections to the testing industry.

In a piece measuring less than 800 words, Aldeman contended that the “idea of less testing” in our nation’s schools, currently considered by liberal and conservative groups alike, “would actually roll back progress for America’s students.”

…annual testing has tremendous value. It lets schools follow students’ progress closely, and it allows for measurement of how much students learn and grow over time, not just where they are in a single moment.

Here is the voice of someone who has not seen students take a standardized test when, yes, they are very much in “that single moment.” That “single moment” looks different for each student. An annual test does not consider the social and emotional baggage of that “single moment” (for example: no dinner the night before; social media or video games until 1 a.m.; a parents’ separation or divorce; a fight with a friend, a mother, a teacher; or general test anxiety). Educators recognize that students are not always operating at optimum levels on test days. No student likes being tested at any “single moment.”

Aldeman’s editorial advocates for annual testing because, he claims, it prevents the kinds of tests that report only a school’s grade-level averages. Taking a group average from a test, he notes, allows “the high performers frequently [to] mask what’s happening to low achievers.” He prefers the kinds of new tests that focus on groups of students with a level of analysis possible only with year-to-year measurement. That year-to-year measurement on these expensive new tests is, no doubt, preferred by testing companies as a steady source of income.

His opinion piece comes at a time when the anti-test movement is growing and states are looking at the expense of such tests. There is bipartisan agreement in the anti-test movement that students are already being assessed enough. There are suggestions that annual testing could be limited to specific grade levels, such as grades 3, 8, and 11, and that there are already enough assessments built into each student’s school day.

Educators engage in ongoing formative assessments (discussions, polls, homework, graphic organizers, exit slips, etc.) used to inform instruction. Interim and summative assessments (quizzes/tests) are used continuously to measure student performance. These multiple kinds of assessments provide teachers with the feedback to measure student understanding and to differentiate instruction for all levels of students.

For example, when a teacher uses a reading running record assessment, the data collected can help determine what instruction will improve a child’s reading competency. When a teacher analyzes a math problem with a child, the teacher can assess which computational skills need to be developed or reviewed.

Furthermore, there are important measures that cannot be taken by a standardized test. Engaging students in conversations may provide insight into the social or emotional issues that may be impeding that child’s academic performance.

Of course, the annual tests that Aldeman says must be used to gain information on performance do not take up as much instructor time as the ongoing individual assessments given daily in classrooms. Testing does use manpower efficiently; one hour of testing can yield 30 student-hours of results, and a teacher need not be present to administer a standardized test. Testing can diagnose each student’s strengths and weaknesses at that “single moment” in multiple areas at the same time. But testing alone cannot improve instruction, and improving instruction is what improves student performance.

In a perverse twist of logic, the allocation of funds and class time to pay for these annual tests reduces the funds available to pay teachers and the number of instructional hours left to improve and deliver the kind of instruction the tests recommend. Aldeman notes that the Obama administration has invested $360 million in testing, which illustrates its choice to allocate funds to support a testing industry, not schools. The high cost of developing tests and collecting the test data strips funds from state and local education budgets and limits the financial resources for improving the academic achievement of students, many of whom Aldeman claims have “fallen through the cracks.”

His argument for continuing annual testing does not mention the obscene growth of the testing industry: 57% in the past three years, up to $2.5 billion, according to the Software & Information Industry Association. Testing now consumes the resources of every school district in the nation.

Aldeman concludes that annual testing should not be politicized, and that this time is “exactly the wrong time to accept political solutions leaving too many of our most vulnerable children hidden from view.”

I would counter that our most vulnerable children are not hidden from view by their teachers and their school districts. Sadly, their needs cannot be placed “in focus” when the financial resources are reduced or even eliminated in order to fund this national obsession with testing. Aldeman’s defense is indefensible.

An interesting graphic came across my screen this week. Its purpose was to call attention to the hours spent testing elementary students by comparing them to the hours spent on tests for college or graduate school:

[Chart: hours of standardized testing for elementary students compared with hours for college and graduate school entrance exams]

Standardized testing is not new to schools in the State of Connecticut. Many schools will be piloting the Smarter Balanced Assessment (SBAC) this year for state testing. The new testing schedule will be the same as that of the NY State tests. The SBAC website provides testing times:

[Chart: SBAC testing times, from the SBAC website]

Both charts illustrate the number of hours that elementary, middle, and high school students will sit in order to take tests measuring their achievement against the Common Core State Standards (CCSS). The SBAC tests will be given over a period of weeks, and scheduling may depend on the number of available computers that meet the testing software criteria.

Each sitting will match the minimum amount of time an older student sits for college and law school entrance exams. While these entrance exams (SAT, LSAT, and MCAT) are taken only once, the SBACs are taken annually in grades 3-8 and again in grade 11. Consider that an average student’s experience taking the SAT is a little under four hours, while a student will take the SBAC repeatedly, seven sittings averaging roughly 7.5 hours each, for a total of 52 hours over the course of one academic career. Yet the hours spent taking a test are not the only hours committed.

Washington Post education reporter Valerie Strauss cited a study by the American Federation of Teachers in her July 25, 2013, article “How much time do school districts spend on standardized testing? This much.” The report compared “two unnamed medium-sized school districts — one in the Midwest and one in the East” and determined that:

The grade-by-grade analysis of time and money invested in standardized testing found that test prep and testing absorbed 19 full school days in one district and a month and a half in the other in heavily tested grades.

The time spent on SBAC testing alone is roughly 0.7% of the school year (based on an average of 1,100 school hours/year), but when test preparation is added (e.g., 19 days), that percentage jumps to about 11%. This jump is enough to make the time for test preparation equivalent to a year of physical education classes. Ironically, research is showing that physical education may be the best kind of test preparation.
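For readers who want to see the arithmetic behind those percentages, here is a minimal sketch in Python; the 8 hours of total SBAC sitting time and the 180-day school year are assumptions for the sketch, not figures from the AFT report:

    # Rough check of the testing-time percentages cited above.
    school_hours = 1100                    # average school hours per year
    hours_per_day = school_hours / 180     # ~6.1 hours per school day

    sbac_hours = 8                         # assumed total SBAC sitting time
    prep_hours = 19 * hours_per_day        # 19 full school days of test prep

    print(f"testing alone: {sbac_hours / school_hours:.1%}")                     # -> 0.7%
    print(f"testing plus prep: {(sbac_hours + prep_hours) / school_hours:.1%}")  # -> 11.3%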

An article by Dr. Catherine L. Davis and Dr. Norman K. Pollock detailed some of the more recent studies on the relationship between physical education and cognition, noting that “benefits have been detected with 20 minutes per day of vigorous physical activity.”

Their paper, Does Physical Activity Enhance Cognition and Academic Achievement in Children?, determined that “incorporating 40 minutes per day of vigorous activity to attain greater cognitive benefits would require additional programs available to children of all skill levels.” They concluded that:

In a period when greater emphasis is being placed on preparing children to take standardized tests, these studies should give school administrators reasons to consider investing in quality physical education and vigorous activity programs, even at the expense of time spent in the classroom. Time devoted to physical activity at school does not harm academic performance and may actually improve it.

Schools are motivated to try different strategies in order to improve test scores. The data from standardized tests are used to determine the effectiveness of curriculum as well as individual student performance. Standardized test scores are also an increasingly important metric in teacher evaluations. In the State of Connecticut, test scores could count for as much as 40% of a teacher’s performance review, with the spotlight on those educators who teach in the testing grades, 3-8 and 11.

Paradoxically, the focus on standardized testing as an evaluation tool is a contributing factor in the increasing commitment of time and resources to test preparation. Next-generation tests like the SBACs will be taken on computers, which will require school systems to invest in hardware that meets specific criteria. The cost of the hardware and practice software could be justified by increasing the number of students who take the tests.

Additionally, those who fund education want tests that run on this hardware to be an effective measure of student achievement, and these tests must be of a substantive duration to make the expense worthwhile. Given the commitment of time and money, students will continue to sit for tests and test preparation, perhaps for even longer periods in the future.

What might students be thinking as they sit for all these standardized tests?

They might borrow the words of their favorite author, Dr. Seuss, “And we did not like it. Not one little bit.”

The New York State Department of Education’s new standardized tests were administered last week. The tests for grades 3-8 were developed by the educational testing company Pearson and contained new “authentic” passages aligned to the new Common Core State Standards. State tests might have been routine news had not several teachers noticed that the English Language Arts “authentic” passages mentioned products and trademarked names, including Mug® Root Beer and LEGO®.

Product placement on standardized tests in elementary schools is bigger news. The public has grown accustomed to advertisements on webpages, before videos, on scoreboards, and with the well-placed beverage during a movie. Subtle and direct advertising to the youth market to develop brand loyalty at an early age is the goal of almost every corporation.

Consider the “Taking Stock With Teens” survey (taken March 1–April 3, 2013) by Piper Jaffray, a leading investment bank and asset management firm, which gathered input from approximately 5,200 teens (average age 16.3 years). The survey is used to determine trends, and the most recent results note:

“Spending has moderated across discretionary categories for both upper-income and average-income teens when compared to the prior year and prior season. Yet nearly two-thirds of respondents view the economy as consistent to improving, and just over half signaled an intent to spend ‘more’ on key categories of interest, particularly fashion and status brand merchandise.”

Much attention, therefore, is placed on the youth market, and product placement on standardized testing could be a new marketing strategy. For example, corporations in the fashion industry could read this report and be inclined to offer news stories, or commission short stories, that mention clothing brand names to Pearson or another testing company in order to provide future “authentic” passages. What better opportunity for corporations to build brand loyalty than with an audience held captive in a classroom during a state-mandated test?

Valerie Strauss, the education reporter for the Washington Post, reported on the “authentic” passages that mentioned products as “author’s choices.” Here is Pearson’s response to her query:

As part of our partnership with NYSED, Pearson searches for previously published passages that will support grade-level appropriate items for use in the 3-8 ELA assessments. The passages must meet certain criteria agreed upon by both NYSED and Pearson in order to best align to Common Core State Standards and be robust enough to support the development of items. Once passages are approved, Pearson follows legal protocols to procure the rights to use the published passages on the assessment on behalf of NYSED. If a fee is required to obtain permission, Pearson pays this fee. NYSED has ultimate approval of passages used on the assessment.

Strauss’s report, “New Standardized Tests Feature Plugs for Commercial Products,” also indicated that this practice is not exclusive to New York, and that “several different assessment programs have instances of brand names included due to use of authentic texts.” No specifics were mentioned.

Following up with the NY Department of Education in “Stories from the Front Line of Testing,” Beth Fertig of WNYC’s SchoolBook blog asked about the recent product placement:

“This is the first time we have had 100 percent authentic texts on the assessments,” said spokesman Tom Dunn. “They were selected as appropriate to measure the ELA standards. Any brand names that occurred in them were incidental and were cited according to publishing conventions. No one was paid for product placements.”

Perhaps no one was paid this year, but an unwritten taboo was broken with these standardized tests. The New York Post reported one teacher’s response in the article “Learn ABC’s – & IBM’s: Products in Kid Exams” by Yoav Gonen and Georgett Roberts:

“I’ve been giving this test for eight years and have never seen the test drop trademarked names in passages — let alone note the trademark at the bottom of the page,” said one teacher who administered the exam.

They also reported that other commercial properties, including the TV show “Teen Titans” and the international soccer brand FIFA, were included on the tests.

While gaining the loyalty of the youth market is a necessary step for major corporations, the appearance of these brands on standardized tests brings our students one step closer to the future envisioned by Steven Spielberg in the film Minority Report. In one scene, the fugitive John Anderton (Tom Cruise) walks along a corridor while animated billboards market directly to him by calling his name:

The possibility of this kind of marketing exists, and perhaps personalized advertising will call to us every day: a cacophony of advertisements designed to keep brand names in our consciousness. Similarly, even the youngest students are the target of marketing campaigns as part of any corporation’s long-term economic strategy; advertisements on multiple platforms are the “white noise” of their lives. Advertisements are so frequent in students’ lives that any product placement, paid or unpaid, on these standardized tests may contribute to the definition of what is “authentic.” Students are exposed to ads so often and in so many genres that a text may not seem real without some brand name mentioned.

And if that product placement is a small part of what makes a passage “authentic” on a standardized test, can talking “authentic” billboards in the school hallways be far behind?

March in Connecticut brings two unpleasant realities: high winds and the state standardized tests. Specifically, the Connecticut Academic Performance Tests (CAPT), given in grade 10, cover the subjects of math, social studies, science, and English.

There are two tests in the English section of the CAPT to demonstrate student proficiency in reading. In one, students are given a published story, 2,000-3,000 words in length, at a 10th-grade reading level. They have 70 minutes to read the story and draft four essay responses.

What is being tested is the student’s ability to comprehend, analyze, synthesize, and evaluate. While these goals are properly aligned to Bloom’s taxonomy, the entire enterprise smacks of intellectual dishonesty when “Response to Literature” is the title of this section of the test.

Literature is defined online as:

“imaginative or creative writing, especially of recognized artistic value: or writings in prose or verse; especially writings having excellence of form or expression and expressing ideas of permanent or universal interest.”

What the students read on the test is not literature. What they read is a story.

A story is defined as:

“an account of imaginary or real people and events told for entertainment.”

While the distinction may seem small at first, students have a very difficult time responding to the last of the four questions asked on the test:

How successful was the author in creating a good piece of literature? Use examples from the story to explain your thinking.

The problem is that the students want to be honest.

When we practice writing responses to this question, we use the released test materials from previous years: “Amanda and the Wounded Birds,” “A Hundred Bucks of Happy,” “Machine Runner,” or “Playing for Berlinsky.” When the students write their responses, they are able to say that they understood the story and that they can make a connection. However, many students complain that the story they just read is not “good” literature.

I should be proud that the students recognize the difference. In grades 9 and 10, they are fed a steady diet of great literature: The Odyssey, Of Mice and Men, Romeo and Juliet, All Quiet on the Western Front, Animal Farm, Oliver Twist. The students develop an understanding of characterization. They are able to tease out complex themes and identify “author’s craft.” We read the short stories “The Interlopers” by Saki, “The Sniper” by Liam O’Flaherty, and “All Summer in a Day” by Ray Bradbury. We practice the CAPT “good literature” question with these works of literature. The students generally score well.

But when the students are asked to do the same for a CAPT story like the 2011 story “The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum”, they are uncomfortable trying to find the same rich elements that make literature good. A few students will be brave enough to take on the question with statements such as:

  • “Because these characters are nothing like Lennie and George in Of Mice and Men…”
  • “I am unable to find one iota of author’s craft, but I did find a metaphor.”
  • “I am intelligent enough to know that this is not ‘literature’…”

I generally caution my students not to write against the prompt. All the released CAPT exemplars are rife with praise for each story offered year after year. But I also recognize that calling the stories offered on the CAPT “literature” promotes intellectual dishonesty.

Perhaps the distinction between literature and story is not the biggest problem that students encounter when they take a CAPT Response to Literature. For at least one more year students will handwrite all responses under timed conditions: read a short story (30 minutes) and answer four questions (40 minutes). Digital platforms will be introduced in 2014, and that may help students who are becoming more proficient with keyboards than pencils.

But even digital platforms will not resolve the other significant issue, the “connection” question (#3) on the CAPT Response to Literature:

 What does this story say about people in general? In what ways does it remind you of people you have known or experiences you have had?  You may also write about stories or books you have read or movies, works of art, or television programs you have seen.  Use examples from the story to explain your thinking.

Inevitably, a large percentage of students write about personal experiences when they make a connection to the text. They write about “friends who have had the same problem” or “a relative who is just like” or “neighbors who also had trouble.” When I read these in practice sessions, I sometimes comment to the student, “I am sorry to hear about ____.”

However, the reply I most frequently get is startling.

“No, that’s okay. I just made that up for the test.”

At least they know that their story, “an account of imaginary or real people and events told for entertainment,” is not literature, either.

Standardized testing in Connecticut begins next month. The 10th-grade students taking a reading comprehension practice test all look engaged. Their heads are bent down; they are marking the papers. I am trying to duplicate test-taking conditions to prepare them for these exams. I also want to compare the scores from this assessment to one taken earlier in the year to note their progress.

Next month, these students will sit in the same seats, for the same amount of time, perhaps using the same pen or pencil, but they are not the “same.” That is because they are adolescents. They are going through physical changes. They are going through emotional changes. They are going through a period of social adjustment. Outwardly, they may look calm, but the turbulence inside is palpable.

I imagine if I could tune into their inner monologues, the cacophony would be deafening:

  • “…missed the bus!!!! No time for breakfast this morning…”
  • “…this is the biggest zit I have ever had!…”
  • “…not ready for the math test tomorrow…”
  • “…did I make the team?…”
  • “…why didn’t I get that part in the play?…”
  • “…I forgot the science homework!…”
  • “…When this test is over, I’ve got to find out who he is taking to the dance!…”
  • “…what am I going to do when I grow up?…”
  • “…should I get a ride home or should I take the late bus?…”
  • “…Is she wearing the same shirt as me?…”

These students take the practice assessment like other classes of students before them. Unlike generations of students before them, however, social media makes a significant contribution to their behavior. Their access to social media updates through Facebook posts, tweets, or text messages exacerbates the turmoil and creates a social, emotional, hormonal slurry that changes hourly.

And very soon, in one of those hours, these students will take a real state standardized test.

These factors may explain why the highs and lows of my data collection for several students bear a closer resemblance to an EKG than to a successful corporate stock report. I may not want to count the results of an assessment for a student because I know what may have gone wrong on that day. However, the anecdotal information I have for a given student on a given day is not recorded in the collection of numbers; measuring student performance is reduced to the number of items right versus the number of items wrong.

Yet there is still truth in the data. When the individual student results are combined as a class, Student A’s bad day is mitigated by Student B’s good day. The reverse may be true the following week. Averaging Student A’s results with those of all the other members of the class neutralizes many of the individual emotional or hormonal influences. Collectively, the effects of adolescence are moderated, and I can analyze a group score that measures understanding. Ultimately, the data averaged class by class, or averaged across a student’s ups and downs, is more reliable in providing general information about growth over time.
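A minimal simulation, with hypothetical numbers rather than real student data, illustrates why the class averages settle down even when individual scores do not:

    # Each student's "true" score is blurred by a good-day/bad-day swing;
    # averaging across the class smooths those swings out.
    import random

    random.seed(1)
    true_scores = [70 + i for i in range(25)]    # a class of 25 students

    def one_sitting(score):
        """One test day: the true score plus an adolescent-day swing."""
        return score + random.uniform(-15, 15)

    for week in (1, 2, 3):
        sittings = [one_sitting(s) for s in true_scores]
        class_avg = sum(sittings) / len(sittings)
        print(f"week {week}: student 1 scored {sittings[0]:5.1f}, "
              f"class average {class_avg:5.1f}")

    # Student 1's score jumps around from week to week, while the class
    # average stays near the true mean (82): group data is steadier than
    # any one student's single day.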

Although I try to provide the ideal circumstances in order to optimize test scores, I can never exclude that social, emotional, hormonal slurry swirling in each of their heads. I know that the data collected on any given day might be unreliable in determining an individual student’s progress. I cannot predict the day or hour when a student should take a test to measure understanding.

How unfortunate that this is exactly what happens when students take a state standardized test on a predetermined date during an assigned hour, regardless of what turmoil might be going on in their lives. How unfortunate that the advocates of standardized testing are never in the classroom to hear the voices in the adolescent students’ internal monologues: “…I am so tired!… When will this be over?… Does this test really show what I know?”

The fiction selected for standardized testing is notorious for its singular ability not to challenge; these stories do not challenge political or religious beliefs, and I have long suspected they are selected because they do not challenge academically.

My state of Connecticut has had great success locating and incorporating some of the blandest stories ever written for teens for use in the “Response to Literature” section of the Connecticut Academic Performance Test (CAPT).

The CAPT was first administered to students in grade 10 in the spring of 1994, and the quality of the “literature” has been less than challenging. For example:

  • Amanda and the Wounded Birds: a radio psychologist is too busy to notice the needs of her teenage daughter;
  • A Hundred Bucks of Happy: an unclearly defined narrator finds a $100 bill and decides to share the money with his/her family (but not his/her dad);
  • Catch the Moon: a young man walks a fine line between delinquency and a beautiful young woman (to be fair, there was a metaphor in this story).

At least three of the stories have included dogs:

  • Liberty: a dog cannot immigrate to the USA with his family;
  • Viva New Jersey: a lost dog makes a young immigrant feel better;
  • The Dog Formerly Known as Victor Maximilian Bonaparte Lincoln Rothbaum: not exactly an immigrant story, but a dog emigrates from family to family in a custody battle.

We are always on the lookout for a CAPT-like story of the requisite forgettable quality to use for practice, and that is how we came upon “A View from the Bridge” by Cherokee Paul McDonald. The story was short, with average vocabulary, average character development, and average plot complexity. I was reminded of this particular story last week when Sean, a former student, stopped by the school for a visit during his winter break from college.

The short story "A View from the Bridge" was used as a practice CAPT test prompt

The short story “A View from the Bridge” was used as a practice CAPT test prompt

Sean was a bright student who, through his own choice, remained seriously underchallenged in class. For each assignment, Sean met the minimum requirement: minimum words required, minimum reading level for an independent book, minimum time spent on a project. I knew that Sean was more capable, but he was not going to give me the satisfaction of finding out. That is, until “A View from the Bridge.”

The story features a runner out for his jog who stops on a bridge to take a break near a young boy who is fishing, his tackle nearby. After a brief conversation, the jogger realizes that the young boy is blind. The story concludes with the jogger describing a fish the blind boy had caught but could not see. At the story’s conclusion, the boy is delighted, and the jogger is reaffirmed in his duty to help his fellow man (or boy).

“The story ‘A View from the Bridge’ by McDonald is the most stupid story I have ever read,” wrote Sean in essay #1, his Initial Response to Literature. “I mean, who lets a blind boy fish by himself on a bridge? He could fall off into the water!”

I stopped reading. How had I not thought about this?

Sean continued, “Also, fishhooks are dangerous. A blind kid could put a fishhook right into a finger. How would he get that out? A trip to the emergency room, that’s how, and emergency rooms are expensive. I know, because I had to go for stitches and the bill was over $900.00.”

Wow! Sean was “Making a Connection,” and he was well over his minimum word count. I was very impressed, but I had a standardized rubric to follow. Sean was not addressing the details in the story. His conclusion was strong:

“I think that kid’s mother should be locked up!”

I was in a quandary. How could I grade his response against the standardized rubric? Furthermore, he was right. The story was ridiculous, but how many other students had seen that? How many had addressed this critical flaw in the plot? Only Sean was demonstrating critical thinking; the other students were all writing like the trained seals we had created.

One theory of grading suggests that teachers should reward students for what they do well, regardless of a rubric. So Sean received a passing grade on this essay assignment. There were other students who scored higher because they met the criteria, but I remember thinking how Sean’s response communicated a powerful reaction to a story, a reaction beyond the demands of the standardized test. In doing so, he reminded me of the adage, “There are none so blind as those who will not see.”

Beware the Ides of March!
March Madness!
Mad as a March Hare!

Why so much warning about March?
Well, here in Connecticut, our students are preparing for the Connecticut Mastery Tests (CMT) in grades 3-8 and the Connecticut Academic Performance Test (CAPT) in grade 10, which are given every March. While every good teacher knows that “teaching to the test” is anathema, there is always that nagging concern that a little practice is needed to anticipate performance on a standardized test. So, we “practice” to the test.

In English, 10th-grade students complete the Response to Literature section of the test, in which they read a selected fiction story (2,000-3,000 words at a 10th-grade reading level) and respond to four questions that ask for:

  • a student’s initial reaction;
  • a note on a character change or a response to a quote;
  • a connection to another story, life experience, or film;
  • an evaluation of the quality of the story.

Unfortunately, authentic practice for this test is time-consuming, requiring 70 minutes that include the reading of the story and the writing of the four essays, roughly a full handwritten page in response to each question. Needless to say, our students do not like multiple practice tests for the CAPT, so developing the skills needed to pass the Response to Literature must be addressed throughout the school year.

When practice time does arrive, students can be “deceived” into CAPT practice through technology. We have been trying two abbreviated practice approaches using our class netbooks: students actively read a text with embedded hyperlinks, or they use quiz/test-taking software. In these practice assessments the student responses are typed and shorter in length, but they still cover the same questions. A hyperlinked test practice, including the sharing of results, can be done in one 40-minute class period.
In the first approach, we select a short story that can be read in under 15 minutes and embed questions at critical points in the text that are tied directly to the Response to Literature questions. The students then respond to these questions as they read. The easiest software for creating a hyperlinked text is Google Documents, using the “form” option to create individual questions. Each question’s URL can be hyperlinked at specific moments in the text; an example is seen below. Multiple-choice, scale, or grid questions are alternate selections that can be embedded in a story in order to provide a quick snapshot of a group’s understanding via the “show summary of responses” option once the assessment is complete. There are many short stories in the public domain that can be posted on a site such as Google Docs for student access so as not to conflict with copyright laws.
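For teachers comfortable with a little scripting, the same idea can be sketched outside of Google Docs. In this minimal Python sketch, the marker format, the prompts, and the form URLs are hypothetical placeholders, not the actual Google Forms workflow:

    # Splice question links into a story at chosen points, producing a
    # simple HTML page students can read and click through as they go.
    story = (
        "Margot stood apart from the children. [Q1] "
        "The sun came out for one hour. [Q2] "
        "They remembered the closet door. [Q3]"
    )

    # Each marker maps to a prompt and a placeholder form URL.
    questions = {
        "[Q1]": ("What is your initial reaction?", "https://forms.example/q1"),
        "[Q2]": ("How has a character changed?", "https://forms.example/q2"),
        "[Q3]": ("Is this good literature? Why?", "https://forms.example/q3"),
    }

    html_text = story
    for marker, (prompt, url) in questions.items():
        html_text = html_text.replace(marker, f'<a href="{url}">[{prompt}]</a>')

    with open("practice.html", "w") as f:
        f.write(f"<html><body><p>{html_text}</p></body></html>")

Students then respond in the linked forms as they read, and the teacher reviews the collected responses after class.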

The second approach uses quiz and test-taking software, such as Quia, where a teacher can paste sections of the text with a question posed at the end of each section. Ray Bradbury’s “All Summer in a Day” (under a Creative Commons license) is one story we are currently using for CAPT practice next week; the practice test (a section is seen below) can be taken at http://www.quia.com/quiz/3525412.html

The use of hyperlinks to monitor student understanding or to practice a procedure that will be helpful in a standardized test is not difficult to implement. Teachers are able to choose the kinds of questions and the placement of questions at critical sections of a text, and students like the ability to respond as they read in short answers rather than in practice essays.

While there is nothing that can be done to stop the onslaught of tests that come in March, the embedded hyperlink provides a way to satisfy that urge to practice and still engage the students. You can even try a hyperlink response to a text by clicking here!

See? Wasn’t that easy?