A great deal of criticism is being leveled at the current vogue of high-stakes testing in elementary and secondary schools. And rightly so. I've actually blathered on about this issue on the Farm, but I've also been trying to figure out how to measure student learning without falling into the quagmire in which public primary and secondary schools find themselves. The push for numbers that reflect how students learn has produced an education system (if we can call it a "system" at all) that favors quantity over quality, and that makes good on its promise to leave no child behind only by leaving them all behind together. Teachers, quaking in their boots over the possibility of being fired if their students perform poorly on poorly designed exams, teach only what the test requires, and the little automatons march out of school in tidy rows, knowing a few facts but having no idea of how to think.
And then they march on into college having also been sold on touchy-feely visions of their own worth as persons, given prizes for being there, and patted on the back and awarded a medal when they "try." To do otherwise smacks of elitism, and violates some bizarre notion of egalitarianism that equates existence with excellence.
And then we get 'em. What's a college teacher to do, when faced with a classroom full of shining faces that expect to be given grades, and rewarded with advancement simply for showing up?
A competitive classroom can also provide a rude awakening to the pampered, who tend to be ill-prepared to fight their own battles or to account for how they arrived at a particular solution. "I like the feel of it," they say, unable to explain why. Or even what "feel" means in this context.
So we deal with the situation however we can. We start them over, in essence, in order to give them some idea of how things will be when they enter the "real world" of the marketplace, using every tool at our disposal to help them learn how to take constructive criticism, analyze and solve design problems, and even overcome twelve years' worth of not learning how to learn.
Every five years or so, along come the accreditation folks, to assess how we assess our students. Show me the numbers that indicate that you're constantly improving. Show me the tests, the charts, the rubrics, the data. But those of us who don't collect "data," who follow a more evolutionary process in our teaching, are left with a conundrum. Do we fall into the trap, contriving tests to "measure" what our students learn? Or do we continue to follow our teacherly instincts and use our training to understand what's going on in our classrooms, and try to address issues as they arise, term by term?
In wrestling with the fact that accreditation is here to stay, and that it does serve a purpose (it keeps some of the really shoddy operations at bay, and maintains basic standards that offer students assurance of at least a measure of quality), I've spent a long time thinking about how to assess the progress students actually make in my classes in an authentic, meaningful way. I can't really do it with numbers, other than by keeping track of grades, so there has to be a way to account for what goes on that doesn't rely solely on instruments like Scantronable tests.
As I looked back on how I grade and how I prepare for classes each quarter, I noticed a pattern. At about week 7 (of 11) I invariably start making notes about what's working and what isn't. One measure is to look at how students perform on particular exam questions at midterm. If a large proportion don't "get it," I examine my presentation of the material, the wording of the question, and even the classroom environment: the dynamics of a particular group, the performance of the technology I'm using, whether or not students are using computers to take notes, etc. Sometimes I can quickly determine the source of the problem and address it. I always go over the questions after the exam to obtain the students' perspectives, and then note the changes needed for the following quarter, or even for the final exam.
This had never been a particularly organized or formal process--just something I did as a matter of course. And then I realized that the narrative I write every year for my performance review, about the year's successes and failures, reflected what I had learned from this informal process.
With these two basic elements in mind, I set out to create a template for a more structured process, and I came up with the following elements. I call it "Reflective Assessment" because it's based on the process of thinking back, and then thinking forward. It's probably something we all do, and it's a perfectly natural and certainly more "authentic" way to ascertain how well our students are learning than simply charting numbers based on tests that can't really measure thinking in the first place. As thought-provoking as I try to make my exam questions, they're never going to measure much more than how well students have taken notes and paid attention.
The first step is to maintain a notebook in which to jot down issues that arise during the quarter. Sometimes these notes show up on sticky-pads and such, but these can be pasted into the notebook, along with miscellaneous scraps of paper we happen to have on hand while watching a film or during a lecture. The idea is to divide the notebook according to the courses we teach--one section for every preparation--as a way of centralizing what comes out of the everyday activity of lecturing, demonstrating, discussing, and whatever else we do as teachers.
The notebook then becomes the basis for a more formal reflection and assessment at the end of the quarter. I tend to do this anyway, as I update each syllabus, schedule, and (because I have internet pages for each course I teach) website, but the notebook now makes it much simpler to remember what needed changing, and often contains suggestions for how to make the changes.
In order to tie the assessment into the objectives of the course, a copy of the course rubric should be included, along with the stated outcome objectives and exit competencies.
Before my annual performance evaluation, I now use the notebook, where I also record the changes I've made, to describe what has occurred in each of my courses over the year. In the future, I can see the notebook's becoming a tool for developing new courses as well, and I'm planning to add a "curriculum development" section--at least until those ideas overflow the space allowed them and need a notebook of their own. I'm starting to write a quarterly assessment for each class, based on the notebook, with plans to consult it before I create my midterm exams, in order to make sure I'm attending to previous issues.
The main problem I've encountered with this system so far is that the notebook isn't always handy, which is why I like to have sticky-notepads among the materials I take to class, on my desk at school, and at home. I have one nearby when I'm grading exams and essays, too. The pads are getting bigger (the little square ones aren't all that useful), but it's rather entertaining to have a bunch of colorful bits stuck higgledy-piggledy in the notebook--where they can be rearranged as necessary. Different colored pads for different classes might be a good idea, too, but I'm not that organized yet. Mostly I use the free ones I get from publishers.
Since the process is fundamentally dialectical, the final step should involve discussing the results of each course assessment with the program or department Director, perhaps in conjunction with the year-end review. His or her comments could be added to one's teaching portfolio (I keep my narratives in mine), and the results of these conversations would provide an overview of the instructor's ongoing assessment process over the years.
The only drawback I can see to formalizing the process is that SACS evaluators might look askance at having to read through the notebooks, or even the performance narratives. But if they're really interested in transforming the quality of education by finding meaningful ways of improving instruction, reflective assessment will almost certainly give them a better picture of what's going on than a few pie charts would.
And then, of course, we could still illustrate each year's assessments with beautifully designed charts and graphs that mark the rise and fall of grades and the vicissitudes of teaching in a world that doesn't really seem to want to learn. But that's another problem, for another day.
Image credit: Wall painting portraying a scholar, from Herculaneum, first century CE. Wikimedia Commons.
2 comments:
Thought you might enjoy this piece on words like higgledy-piggledy:
http://www.good.is/?p=14517
Well, Mark, I have no idea why this might be pertinent to this post, but I'll leave it, given my respect for Good. Otherwise it seems a bit fishy-wishy.