[Image courtesy of funnyjunk.com.]
Among other random pursuits this summer, I’ll be spending a week helping to cook up fresh curriculum for my school district. It used to be that curriculum-writing gigs offered a fun, relatively informal opportunity to network and talk subject-matter with colleagues with whom we wouldn’t otherwise get to work; a chance to recharge, rethink, and reconnect with what we teach outside the daily grind of the classroom. And while I’m hopeful that’ll still be the case this year, I also fear this is yet another realm of teaching which the deadly New Clinicism has managed to pollute.
Case in point: the executive leadership has saddled each of us with a mint copy of The Art and Science of Teaching beforehand, with a directive to meditate carefully on the proficiency scale described in Chapter 1. It seems the purpose of this summer’s overhaul isn’t the curriculum content per se, but rather reorganizing that content into discrete learning goals that can be tracked and measured. We’ll also tackle the thorny problem of how to reconcile Bob Marzano’s generic four-point scale with traditional 0-100 grades. Something tells me that this time around there will be scant discussion of composition and literature and all too much discussion of the usual, ubiquitous gurus and gimmicks.
In ASOT and other texts, the Good Doctor assures us that proficiency scales are good for helping students to track their own learning, and that having students do so yields an average—wait for it—32-percent gain in student achievement. Like many other superstar researchers, Marzano clouts us with precise figures while avoiding precise definitions—like just what in the hell is meant by “achievement.” The 1986 study which Marzano often cites on this topic, for instance, employs the terms “student achievement,” “student progress,” “goal attainment,” and “educational effects” interchangeably without stipulating what any of these measures is supposed to, you know, measure. We might surmise that they refer to the usual pre-test/intervention/post-test structure, but one must hunt down the twenty-one original studies to find out (the Fuchs and Fuchs paper is itself a meta-analysis). In doing so, we will also find that seventeen of those original studies dealt with special-education students, and that 98% of the student subjects in those seventeen studies were “mildly to moderately handicapped.”
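For readers curious where such suspiciously tidy percentages come from: figures like these are typically effect sizes (differences measured in standard-deviation units) converted into percentile-point gains by way of the normal curve. The sketch below is my own illustration of that arithmetic, not Marzano's published method; note that an effect size of roughly 0.92 SD happens to yield about 32 points, and the .8-SD figure from the Fuchs and Fuchs passage quoted later yields about 29.

```python
from statistics import NormalDist

def percentile_gain(effect_size: float) -> float:
    """Percentile-point gain for the average student, assuming normal
    distributions: the area under the standard normal curve between the
    control-group mean and `effect_size` SDs above it, times 100."""
    return (NormalDist().cdf(effect_size) - 0.5) * 100

print(round(percentile_gain(0.92)))  # 32
print(round(percentile_gain(0.8)))   # 29
```

The conversion itself is innocent enough; the mischief lies in quoting the output ("a 32-percent gain!") while leaving "achievement" undefined.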
But anyhey, let’s grant the Good Doctor his premise. Most teachers would agree that encouraging students to monitor their own progress, and thus to assume more responsibility for their own learning, is a good thing. But there is surely an infinite number of ways to do this. For example, when I require students to maintain portfolios of all their essays, as I’ve often done in the past, I’m requiring them to monitor their own progress as writers. The manner in which students monitor their own learning should reflect the nature and variety of the academic disciplines themselves. In typically clinical fashion, however, the good Dr. Marzano advocates developing a four-point scale for measuring every learning objective taught in every course or unit, and then requiring teachers (and students themselves) to plot students’ individual learning in line-graph form on a standardized worksheet. The Fuchs article is especially exuberant about requiring teachers to maintain physical graphs of student progress:
When teachers were required to employ data-utilization rules, effect sizes were higher than when data were evaluated by teacher judgment.…Data-evaluation rules required practitioners to analyze student performance at regular intervals and, if the data suggested certain patterns, to introduce instructional changes into a student’s educational program. For example, Fuchs, Deno, and Mirkin (1984) required teachers to calculate a line of best fit through every 7 to 10 data points. If a line of best fit was less steep than the goal line, running from baseline to the intersection of the criterion performance and the goal date, teachers were required to institute a programmatic change. Results suggest that, in order to effect greater learning for pupils, teachers might employ explicit, systematic rules to evaluate the data they collect.…Finally, the method by which data were displayed produced a significant finding. When data were graphed, effect sizes were higher than when data were simply recorded. With graphing, systematic formative evaluation boosts the average achievement outcome score almost .8 standard deviation units over control group outcomes.
Such is the irrational rationalism teachers must contend with these days in the name of professional development. Forcing teachers to “calculate a line of best fit through every 7 to 10 data points” or perform other tortuous scientific maneuvers will not transform teaching into the precise, surgical profession the New Clinicists so keenly desire, particularly in those academic disciplines which are not, themselves, sciences. True, it’s easy enough to fabricate the illusion of science, and maybe that, after all, is the actual point. Marzano’s proficiency scale still depends on subjective teacher judgments regarding what student mastery of a given objective looks like, and it is therefore no more valid than traditional grades—but that’s okay, because “proficiency scale” just looks and sounds more sciencey. Marzano calls for making a (false) distinction between learning objectives and actual tasks or assignments, which is impossible in an academic setting; the work is the learning and the learning is the work. There’s also pesky evidence that too much “metacognition,” i.e. requiring students to “think about thinking,” has its own negative effects. But no matter. Proficiency scales are sexy, they’re sciencey, they’re data-riffic…and lest we ever forget, data is fabulous!
It bears repeating every so often: since the progressive-education era first kicked off a century ago, education researchers, as a community of professionals, have labored under a persistent credibility problem, and nothing much in the decades since has occurred to mitigate that (except, perhaps, the advent of modern marketing techniques). One reason is that the endeavor itself is misguided; the attempt to reduce all learning to strict empirical procedure makes a mockery of the deliberately non-empirical humanities, those areas of study we invented to cope with areas of life that can’t be measured or quantified. Another reason is the equally persistent hypocrisy of a profession that apparently aims to subordinate another profession (teaching) to itself, demanding that teachers abide by numerous “research-based” gimmicks du jour when the researchers/authors/gurus themselves operate in one of the most under-licensed, under-regulated, under-scrutinized sectors of education. It is this very anything-goes environment that allows brand names like Marzano the luxury of leading us and selling to us at the same time.
So please excuse me if I, like John Proctor in The Crucible, “like not the smell of this ‘authority.’” In the scheme of things, I believe my students are better served reading a play (or writing one of their own) than charting the artificial peaks and valleys of some equally artificial quest for proficiency.