Monday, August 25, 2008

Assessing Assessment

I was preoccupied last week with helping to fight fires at school, lit by the departure of not one but two academic directors, and the chaos of a fairly new phenomenon: what's called a "mid-quarter start," in which new students enter the program and take a couple of classes twice a week for five weeks. Two hundred newbies descended into our hallowed halls, causing overflows in the already-packed parking garage, as well as disconcerting room changes. In all fairness, I don't think anybody expected 200 people to take advantage of the opportunity to enter at mid-term. But the problem's exacerbated by the necessity of sharing our facilities with another college in the corporation's stable. Things should get better when they get their own building (Somewhere Else, we sigh, collectively). The new students, though, seem to be dealing well with the confusion, and the "old" ones have grumbled but gone on with their work. We are, on the whole, an adaptable lot.

In the meantime, we're halfway through the normal quarter, and chest-high in assessment activities, both institutional and pedagogical. The institutional part seems to be going along swimmingly, as we streamline our process and as the various components of the school get used to it and learn how to hold up their end. In the classroom, however, I'm in the midst of a struggle between numbers and ideas.

How do I determine whether, or how well, my students are grasping the information and ideas I'm presenting in class? A few years ago, as my classes grew in size and my level-two students were clearly not absorbing the material we covered in the level-one class, I began (reluctantly) to test them. I'd much rather assign design projects that test their ability to apply principles to real-world problems, but when I did, students ended up learning only what they needed for a given project.

My initial exams were really tough, and resulted more in abject terror than in real learning (even though I weighted exam grades very lightly in the end). So I decided to couple my insistence that students maintain workbooks of images and notes with exams at both midterm and the end of the quarter. Here the results were much more satisfactory, especially since I allowed them to use their workbooks to take exams and tied their scores together: what you get on the exam is what you get on the workbook, since the exam is only going to be as good as the workbook.

For a fair amount of time, this seemed to work. Students diligently inserted images into "image lists"--pre-printed forms that list basic information about each work we view, with space for a sketch or a picture of the object, and another space for notes. To help them out, I link every image I show in class to a source on the web. Try as I might, I can't get most of my students (who are already paying a premium to attend the school in the first place) to plunk down $130 for a survey textbook and another $90 on a history of graphic design. So I maintain a course website with resources to help them locate the images and additional information--and to show them what reliable information on the web looks like.

As I said, this worked for a while. But lately I've noticed that some students are blowing off completing the workbooks, imagining (on what evidence I can't say) that they'll remember all the images and information and be able to take the test without engaging in the process I designed to help them absorb what they need. No amount of describing my own experience in art history classes (or descriptions of having to walk to class nine miles uphill in the snow) seems to be persuasive enough to get them to accomplish the task independently.

And so I am instituting a separate workbook grade, based on completion of the slide lists, to be assessed after each exam, at least in the first-level course. I'm doing this in direct response to an observed phenomenon with clearly apparent impact on the quality of learning that takes place in my classroom.

Why am I telling you all this? Isn't this what we always do? My answer to both of these questions is that this is an example of the kind of assessment most teachers automatically do. We don't need statistics to tell us when things are going badly or well. But the Assessment Regime (I'm borrowing this term from a dean at a local community college) maintains that without numbers, we have no way of tracking whether or not our students attain measurable outcomes. This assumption is, of course, poppycock--except to those who seem to have spent so little time in the classroom that they really do not understand how it works, or who have focused almost exclusively on theory without much in the way of practice.

I'm not saying that all teachers, just by being teachers, are capable of this kind of reflective assessment (which is what I've begun to call it). It takes practice and experience--but it can be taught by example (and every teacher needs to spend time with a mentor, or have good models to emulate). But being able to realize when students are not learning as well as they should be is a basic qualification for teaching. Anyone who can't see or who ignores signs of trouble shouldn't be a teacher in the first place.

I'll continue this topic later, and pose some further questions about effective ways of gauging student success (another phrase that's become a buzzword) that don't involve their taking standardized, one-size-fits-all exams. But I wanted to inject this particular notion into the conversation: that good teaching requires a kind of internalized thermostat capable of setting off alarms when negative change occurs, and that alerts us when a particular stratagem works especially well. It may not be measurable in any quantitative way at all, except that it's grounded in carefully designed parameters of success and failure, such as in an outcome rubric that indicates expectations and outlines the means of accomplishing goals.

We are only wise, Plato taught, when we recognize the limits of our own understanding. Knowledge requires lifelong learning, the continuing use of what we learn at any given point. Assessment needs to be processual--not marking particular achievements at particular points, but rather establishing the ground for ongoing, complex learning. If my students leave my classes with a basic vocabulary and a foundational set of critical skills, I don't really care if they remember the exact date on which Pablo Picasso finished Les Demoiselles d'Avignon. But they'd bloody well better remember how important that painting was to the future of art, and what made it possible for Picasso to paint it in the first place. How does one test something like that on a multiple-choice or a true/false exam?

Image credit: Detail of Raphael's School of Athens in the Stanza della Segnatura of the papal apartments in the Vatican, featuring Plato (left) and Aristotle, two of the first educational theorists, both of whom were quite good at the practice of teaching as well--judging from their students. Wikimedia Commons.

2 comments:

krimzon11 said...

I feel your pain. Your plight, and that of the other instructors at the institution, has an effect on the students too. One of the problems I see, and have experienced, is the "sales goals" which seem to be driven by the business side of the institution.

I believe that everyone has a right to choose what they want to pursue. But at a certain point there has to be a line drawn between realism and fantasy. To me, the people in admissions shouldn't be forced to push the biggest fad down the throats of every potential student that walks through the door. They should instead truly assess the skills of the students and give them an honest opinion of what they should pursue, or what kind of effort they really need to put forth.

Right now, there are peers of mine who really don't belong at this school, not because they don't have talent, or passion, or drive, but because they've been misled. I suffered that experience myself, finding out around the time I got my first degree that I really didn't like the program at all and that it was a chore to even start up the software to get any work done.

This also contributes to the numbers game that you are forced to play. Students are guided down a track that isn't suited to them; they become burned out and stop trying, and then the instructors have to deal with the assessment dilemma that you described.

In the meantime, I try to do all I can to help my peers, by trying to get them to answer their own questions about whether they should be there or not, instead of looking to others to make the decision for them.

Owlfarmer said...

Thanks for this very thoughtful response. The balance that proprietary schools have to maintain is difficult, and I think we've come a long way toward providing better initial information than we once did. But I wholly agree that we need to do a much better job of matching students to programs.

I also discovered the real advantage to peer involvement this quarter, when I sat in on a class for which I was vastly under-prepared. My fellow students rallied to my aid and did a wonderful job of helping to teach me some of what I was missing. I ultimately had to drop the class because of time and energy constraints, but the lessons were valuable nonetheless.

I'm positive that what you provide your own peers is a significant portion of what makes this school successful. And your input is equally valuable, because it'll help me bring some of these matters up in the appropriate venues. Thanks again!