Archives For rubrics

This summer, I plan to spend time organizing question stems that spark critical thinking and posting them on a number of slides to share with teachers.
OR
I could shorten the process and use just one slide. I could ask one question that is guaranteed to drive critical thinking. I could ask:

“So what?”

To be honest, the first time I was asked this question in an academic setting, I was appalled. I felt I was being taunted. I was sure the professor was just being rude.

I was uncomfortable…I could not give an effective response.

“So what?”

I hated the question. I hated that the professor was goading me. I hated Dr. Steven D. Neuwirth. 

I was taking a graduate course, Literature of the American South (560), which I thought would be a “fun” course as I completed my Master’s degree in English.

I remember distinctly the moment that was not fun…the evening of the second class.

“So what?” Dr. Neuwirth wrote on the chalkboard; he snapped a piece of chalk as he underlined the question for emphasis.

“So what?” he repeated in class after I offered what I thought was a brilliant observation about dignity as a character trait in a discussion of William Faulkner’s As I Lay Dying.

I was irritated. I had worked very hard on my responses.

“So what?” he scrawled in big letters on the paper I handed in three weeks later.

I was angry. I had worked even harder on that response.

My frustrations continued. Nothing in my training had prepared me for his persistence with the So what? question.

I had done what had worked in every other class. I had developed a thesis. I had used evidence. I had proved my thesis.

Regardless, my answers did not satisfy his challenge. So what? He found my reasoning lacking, and because he was not satisfied, neither was I.

I needed to think how to explain better.
I had to think differently.
I had to think critically.

It was then I realized that Dr. Neuwirth’s “So what?” question was making me think critically.

Dr. Neuwirth’s irritating challenge brought me to recognize that it was not enough for me to develop and prove a thesis in a paper. I had to prove why my argument mattered.

For example, it was not enough to prove that Faulkner’s characters displayed dignity despite their social status; I had to ask: so what is the reader to take from his writing?

I had to ask the question “So what?” not with attitude but with curiosity. Curiosity led to inquiry:

  • So what was my point? 
  • So what was missing from my response?
  • So what should I want the reader to know or do?
  • So what happens next?
  • So what do I do to cause or prevent something from happening? 
  • So what makes this work or not work?
  • So what will this information lead me to study next?

Such inquiries led me to draw conclusions. I had always found conclusions difficult to write. I had always followed the predictable formula of restating the thesis, but I found that when I used the critical question So what? I could offer a broader conclusion.

For example, when I developed a thesis on the dignity of Faulkner’s characters and provided evidence from the text, I was really posing the question “Why should anyone read novels by Faulkner?” When I asked myself so what? I could conclude that Faulkner’s characters spark empathy in the reader.

It turned out that I did not hate the So what? question.

I did not hate Dr. Neuwirth…although, admittedly, liking him took a little longer. While I did understand the importance of being challenged, I still found him a brilliant but abrasive teacher.

Four years after that class, I became a teacher, and I taught literature. My students wrote predictable and boring conclusions that restated the thesis. They were not thinking critically. I had to do something.

Dr. Steven Neuwirth of Western Connecticut State University created the University’s Honors Program and served as its first director; he passed away in February 2004.


I asked my students So what?

And I scrawled So what? on their papers.

And I wrote So what? on the Smartboard, without chalk.

My students also hated the So what? question.

They complained to me, but their conclusions improved.

So here is one question, one irritating question, for critical thinking, to share on one slide:

So what?

Our school has been preparing for an accreditation by the New England Association of Schools & Colleges, Inc. (NEASC), and that means two things:

Housecleaning and housekeeping.

The housecleaning is the easy part. A great deal of time and effort has been spent on making the school look nice for the accreditation team. Considering that our campus is in the bucolic Litchfield Hills of Connecticut, we had a great start. Our building is extremely well maintained, and our maintenance staff has been recognized for their “green” maintenance policies. The final housecleaning details were the addition of student art on the walls and a large canvas featuring the student-designed logo centered on the motto “Quality, Academics, Pride.”

Preparing the housekeeping was different. Housekeeping required that all stakeholders in our school community reflect on how well we keep our “house,” our school, running. There have been meetings for the past two years: meetings with community members, meetings with students, meetings with teachers across disciplines. There have been committees to research each of the following topics:

  • Core Values, Beliefs, and Learning Expectations
  • Curriculum
  • Instruction
  • Assessment of and for Student Learning
  • School Culture and Leadership
  • School Resources for Learning
  • Community Resources for Learning

After all the meetings came the writing of the reports, and after all the reports came the gathering of the evidence. Finally, the evidence sits in eight bins in a room in the agricultural wing of the school ready for the volunteer accreditation team to review.

What is most striking about the collected evidence is the variety. The evidence today contrasts with the evidence from the accreditation several years ago. For each lesson plan on paper, there is a digital lesson plan. For each student essay drafted, peer-reviewed, and handwritten on composition paper, there is a Google Doc with peer comments, and to see each draft, one need only check “see revision history.” Whether or not members of the NEASC committee check the revision histories of individual documents is not as important as how they will check the history we have provided in the evidence bins and websites. In looking at the evidence, the NEASC committee will note our academic housekeeping, and they will make recommendations as to how we should proceed in the future.

The entire school community has every right to be proud of Wamogo Regional High School, and recommendations from NEASC will help guide us in the future. But for tonight, the housecleaning and housekeeping are over.

A message from the Vice Principal arrived by e-mail tonight; she sums up the experience:

When driving home from school this evening, I was thinking about the arduous process we have all been engaged in over the past two years.  I don’t believe there is a single member of our school community that hasn’t played a part in this important preparation.  Many of you worked tirelessly on committees, writing reports, culling evidence, hanging student work, etc., etc., etc.  I just wanted to take a moment and  thank the entire Wamogo community for the rally we have all engaged in to prepare for this important visit.  I know that the visiting school will easily see what a special place Wamogo is and the obvious talents of our staff and students.  I am extremely proud of our school and want you to enjoy showing the visiting committee what wonderful work you are doing with our students.

Welcome, NEASC. Our house is ready.

Ode on Grading (Earned)

January 28, 2013

The semester just ended, and there are papers to grade. In addition, the midterms are done, and there are essays and papers to grade. I am surrounded by paper. A recent article titled “Why Teachers Secretly Hate to Grade Papers” by John T. Tierney in The Atlantic received quite a bit of buzz, with most teachers flat out saying, “Secretly? There is nothing secret about our hating to grade!”

The article discussed the inability to be fair when grading, but I particularly enjoyed the following paragraph:

The sheer drudgery and tedium. When you’re two-thirds of the way through 35 essays on why the Supreme Court’s decision in the case of McCulloch v. Maryland is important for an understanding of the development of American federalism, it takes a strong spirit not to want to poke your eyes out with a steak knife rather than read one more. I have lots of friends who are teachers and professors. Their tweets and Facebook status updates when they’re in the midst of grading provide glimpses into minds on the edge of the abyss — and, in some cases, already deranged.

Since several of my classes are deep in the Odyssey, the “poke your eyes out” reference kicked all my Greek allusions into high gear. Consequently, instead of a full-fledged blog post that would drain me of the minutes I have before grades are due, I leave you with a quick poetic attempt to capture my grading frustration:

Tantalus Has It Easy

My desk is piled high
with papers and essays that had been assigned
during the Christmas break,
when Dawn spread her rosy fingers on the
new year calendar empty of responsibilities.

Sing in me, Muse, and tell me
What was I thinking? an invocation I repeat
with each carefully completed grading rubric
stapled to a hastily penned paper.

More than one paper bears the correcting
suggestions I had made days ago without
the corrections I suggested. I am Cassandra,
unhappy prophetess whose warnings
go unheeded.

I hear a teacher’s scantron sheets
click noisily in the teacher’s room next door.
“Grading’s done,” he chortles, while I am
caught between the Scylla of unintelligible answers
and the Charybdis of illegible handwriting.

I see the PE teacher leaving early to work out
the stress of the week at the local fitness club.
Apparently, fate favors
the Olympically-sculpted

While I, like Sisyphus,
reach for another paper to roll up
the grading curve.

Bond.

James Bond.

007.

On Her Majesty’s Secret Service, and (surprise!) a metaphor for why relying on standardized tests is flawed.

Honestly, I was not expecting Skyfall, the latest James Bond blockbuster, to resonate with issues being discussed in educational reform today, but sitting in the darkened theatre, I suddenly heard the same concerns about the validity of tests used in assessing secret agents that I hear in assessing students.

Apparently, MI6 wrestles with the question: Do tests really measure ability?

Spoiler Alert! If you are someone who intends to see the film, I may be giving away a few facts; not major plot points, but a few incidental pieces of information. Bond purists: stop reading now, please.

Before Bond (Daniel Craig) returns to work for M (Dame Judi Dench), he needs to pass a set of standardized performance tests. He is first put through a series of grueling fitness tests. He is tested on his ability to shoot a pistol at various distances on a firing range. Finally, he faces a series of psychological tests. The results of this battery of objective tests are initially kept from the audience, but the viewers are not surprised when he eventually returns to service.

Painting at the National Gallery in London

The film’s screenwriters saw fit to combine the concerns about the results of these tests with MI6’s concerns about Bond’s age. No scene is more direct in confronting Bond’s age than his first meeting with the young gadget supplier “Q”. The filmmakers placed Bond in London’s National Gallery, sitting on a bench looking at J.M.W. Turner’s painting Fighting Temeraire Tugged to Her Last Berth To Be Broken Up, 1838.

Turner’s painting symbolically depicts the shift from sail power to steam, the billowing white clouds swirling like sails a stark contrast to the blackened smokestack of the tug in the foreground of the painting. Q enters, sits next to Bond, and strikes up a conversation:

Q: It always makes me feel a bit melancholy. Grand old war ship, being ignominiously hauled away to scrap… The inevitability of time, don’t you think? What do you see?
Bond:  A bloody big ship. Excuse me.
Q: 007. I’m your new Quartermaster.
Bond: You must be joking.
Q:  Why, because I’m not wearing a lab coat?
Bond: Because you still have spots.
Q: My complexion is hardly relevant.
Bond: Your competence is.
Q: Age is no guarantee of efficiency.
Bond:  And youth is no guarantee of innovation.

Skyfall (http://www.imdb.com/title/tt1074638/quotes)

Of course, the MI6 tests are designed to determine if Bond is too old, if his brand of “boots on the ground” spying should be replaced by agents in command of newer technologies. And of course, M is obligated to submit Bond to the required standardized tests, tests given on one particular day. However, she is not obligated to act on the results of the tests.

M’s response, therefore, is to weigh what audiences know is 50 years of evidence of Bond’s unconventional performance as a creative problem solver. She recognizes that Bond possesses those intangible qualities of initiative and drive, and while a standardized test does measure a level of ability, what makes Bond a valuable British agent is his ability to confound a standard.

Watching James Bond puzzle the test-driven establishment is a large part of the enjoyment for the audience. Agent 007 cannot be limited by a test score if he is going to save the free world.

Which brings me back to the shared message about testing from Skyfall and its application to education reform. The audience understands that the testing in Skyfall is flawed because of its limited results; standardized testing in education is similarly limited. Like M, educators should not let their students be defined by test scores from standardized tests, those single-metric assessments given on one day. Like M, educators should pay more attention to having students develop problem-solving skills and should consider other assessments that measure students’ critical thinking. Students should have the opportunity to be evaluated on the intangible qualities of initiative and drive through project-based learning. Like Agent 007, students should be allowed the opportunity to confound the standards measured by objective testing.

Oh, and maybe they could also ask for their chocolate milk shaken, not stirred.

Is this the Age of Enlightenment? No.
Is this the Age of Reason? No.
Is this the Age of Discovery? No.

This is the Age of Measurement.

Specifically, this is the age of measurement in education where an unprecedented amount of a teacher’s time is being given over to the collection and review of data. Student achievement is being measured with multiple tools in the pursuit of improving student outcomes.

I am becoming particularly attuned to the many ways student achievement is measured as our high school is scheduled for an accreditation visit by the New England Association of Schools and Colleges (NEASC) in the spring of 2014. I am serving as a co-chair with the very capable library media specialist, and we are preparing the school-wide rubrics.

Several of our school-wide rubrics currently in use have been designed to complement scoring systems associated with our state tests,  the Connecticut Mastery Tests (CMT) or Connecticut Academic Performance Tests (CAPT). While we have modified the criteria and revised the language in the descriptors to meet our needs, we have kept the same number of qualitative criteria in our rubrics. For example, our reading comprehension rubric has the same two scoring criteria as does the CAPT. Where our rubric asks students to “explain”, the CAPT asks students to “interpret”. The three rating levels of our rubric are “limited”, “acceptable”, and  “excellent” while the CAPT Reading for Information ratings are “below basic”, “proficient”, and “goal”.

We have other standardized rubrics as well: for example, rubrics that mimic the six-point PSAT/SAT scoring for our junior essays, and rubrics that address the nine-point Advanced Placement scoring scale.

Our creation of rubrics to match the scoring scales of standardized tests is not an accident. Our customized rubrics help our teachers determine a student’s performance growth on common assessments that serve as indicators for standardized tests. Many of our current rubrics correspond to standardized test scoring scales of 3, 6, or 9 points; however, these rating levels will soon change.

Our reading and writing rubrics will need to be recalibrated in order to present NEASC with school-wide rubrics that measure 21st Century Learning skills; other rubrics will need to be designed to address our topics. Our NEASC committee has determined that four-point scoring rubrics would be more appropriate for six topics:

  • Collaboration
  • Information literacy*
  • Communication*
  • Creativity and innovation
  • Problem solving*
  • Responsible citizenship

These six scoring criteria for NEASC highlight a measurement gap created by relying on standardized tests, which directly address only three (*) of these 21st Century skills. Measuring the other 21st Century skills requires schools like ours to develop their own data stream.

Measuring student performance should require multiple metrics. Measuring student performance in Connecticut, however, is complicated by the lack of common scoring rubrics between the state standardized tests and the accrediting agency NEASC. The scoring of the state tests themselves can also be confusing, as three (3) or six (6) point score results are organized into bands labeled 1-5. Scoring inequities could be exacerbated when the CMT, CAPT, and similar standardized tests are used in 2013 and 2014 as 40% of a teacher’s evaluation, with an additional 5% based on whole-school performance. The measurement of student performance in 21st Century skills will be addressed in teacher evaluation through the Common Core State Standards (CCSS), but those tests are still being designed. By 2015, new tests that measure student achievement according to the CCSS, with their own criteria, levels, and descriptors in new rubrics, will be implemented. This emphasis on standardized tests measuring student performance with multiple rubrics has become the significant measure of student and teacher performance, a result of the newly adopted Connecticut Teacher Evaluation (SEED) program.

The consequence is that today’s classroom teachers spend a great deal of time reviewing data that shows limited correlation among the standards of measurement found in statewide tests (CMT, CAPT, CCSS), those in nationwide tests (AP, PSAT, SAT, ACT), and what is expected by the accrediting agency (NEASC). Ultimately, valuable teacher time is being expended in determining student progress across a multitude of rubrics with little correlation; yes, in simplest terms, teachers are spending a great deal of time comparing apples to oranges.
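
To make the apples-to-oranges problem concrete, here is a minimal sketch in Python, using hypothetical scores and scale sizes rather than our actual rubrics or any state formula: a linear conversion can force a 6-point score, a 9-point score, and a 4-point score onto one common scale, but nothing in the arithmetic guarantees that the underlying criteria and descriptors correspond.

```python
# A minimal, hypothetical sketch of rescaling rubric scores onto a common scale.
# The conversion is easy; the comparability it implies is not.

def rescale(score, old_max, new_max, old_min=1, new_min=1):
    """Linearly map a score from one rubric scale onto another."""
    fraction = (score - old_min) / (old_max - old_min)
    return new_min + fraction * (new_max - new_min)

# Hypothetical student results on three differently sized scales.
samples = {
    "6-point holistic essay score": (4, 6),   # (score, scale maximum)
    "9-point AP-style essay score": (6, 9),
    "4-point school-wide rubric":   (3, 4),
}

for label, (score, old_max) in samples.items():
    on_four = rescale(score, old_max, new_max=4)
    print(f"{label}: {score}/{old_max} -> {on_four:.2f} on a 4-point scale")

# The printed numbers line up neatly, but the arithmetic says nothing about
# whether a 4/6 essay and a 3/4 rubric rating reflect the same skills or
# descriptors; that is the low-correlation problem described above.
```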

I do not believe that a single-metric measurement such as Connecticut’s CMT or CAPT, or any standardized test, accurately reflects a year of student learning; I believe that these tests are snapshots of student performance on a given day. The goals of NEASC in accrediting schools, measuring student performance with school-wide rubrics that demonstrate students performing 21st Century skills, are more laudable. However, because the singular test metric has been adopted as a critical part of Connecticut’s newly adopted teacher evaluation system, teachers here must serve two masters, testing and accreditation, each with its own separate system of measurement.

With the aggregation of all these differing data streams, there is one data stream missing. There is no data being collected on the cost, in teacher hours, of the collection, review, and recalibration of data. That specific stream of data would show that in this Age of Measurement, teachers have less time to work with students, the kind of time that could allow teachers to engage students in the qualities of ages past: reason, discovery, and enlightenment.