Monday, August 23, 2010

Testing the Teachers

I was outraged when I learned that the head of United Teachers Los Angeles called for a boycott of The Los Angeles Times because of its "Grading the Teachers" project, which, A.J. Duffy believes, "represents a continuing attack on our profession." I want to be clear: I am not outraged because a union chief called for a job action. I am outraged because the chief representative of teachers in Los Angeles advocated interfering with two of the most fundamental privileges contained in our Nation's Bill of Rights: the guarantee of a free press (no matter how odious) and the guarantee of free speech (no matter how obnoxious). We teach this in all of our schools. What kind of example is being set – particularly in a situation as highly charged as the Times' article on the value-added of individual teachers – if educators not only oppose free and open debate, but actively seek to restrict it?

Let me also be clear about another thing: I am unwaveringly convinced that we educators must be measured – in whole or in part – by our students' performance on standardized and achievement-based tests. Why would we want anything less than the security the public enjoys in knowing that the competence of our attorneys, accountants, physicians – even our realtors – must be measured by their performance on one or more tests? These professionals can't even get to the point of being tested without passing through our schools. If for no other reason (and there are plenty) than to prepare future accountants and physicians for the rigorous testing they must face in order to be licensed to practice their professions, we pre-collegiate educators must emphasize the importance of every form of testing. And, if we emphasize its importance, then we must agree to be measured and evaluated by our students' performance on these tests. I'm reminded of the famous Vince Lombardi quote:

If it doesn't matter who wins or loses, then why do they keep score?

Even more irritating have been quotes from various teachers, such as this one on the Times' blog: "Teenagers are teenagers. They are inexplicable, as are student test scores." Not if you re-read the original story, which analyzed 1.5 million scores from 603,500 students over a seven-year period. Any statistician will support the utility of examining large performance samples taken from hundreds of thousands of test-takers over an extended period of time. The argument that a "student having a bad day" explains a particularly low test score is eliminated by the patterns that emerge from large samples collected over many years. As a matter of fact, we can draw conclusions about teacher effectiveness when it is measured against huge statistical samples. What we can't do is predict the future performance of the teacher. That is up to the teachers and their supervisors.

According to The Times, they "used a statistical approach known as value-added analysis, which rates teachers based on their students' progress on standardized tests from year to year. Each student's performance is compared with his or her own in past years, which largely controls for outside influences often blamed for academic failure: poverty, prior learning and other factors." The utility of this approach at the elementary and middle school levels is supported by the way those grades are typically structured: language arts and mathematics (the two disciplines most commonly measured by standardized tests) are typically isolated from the rest of the curriculum, making it simpler to link student mastery to content taught. It is a more challenging task at the high school level, where the curriculum tends to overlap individual academic disciplines. But we do have a rough framework that can support measuring student performance on the SAT and ACT exams. These tools provide a broad indicator of student achievement near the end of their high school careers. Any high school teacher who is inclined to repeat the blogger's argument that "student test scores are inexplicable" had better not do so in the presence of anxious high school parents looking at college options!
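To make the idea concrete, here is a minimal sketch of the kind of calculation the term "value-added" refers to: compare each student's year-over-year gain against the average gain, then average those differences for each teacher. This is only an illustration with invented numbers – the Times' actual model is far more sophisticated – but it shows why a bad day for one student washes out across many students and many years.

```python
from collections import defaultdict

# Illustrative sketch of a simple value-added calculation.
# All names and scores below are invented for the example;
# this is NOT the Times' actual statistical model.

# Each record: (teacher, student, prior_year_score, current_year_score)
records = [
    ("Ms. A", "s1", 62, 71),
    ("Ms. A", "s2", 55, 60),
    ("Ms. A", "s3", 70, 79),
    ("Mr. B", "s4", 64, 65),
    ("Mr. B", "s5", 58, 57),
    ("Mr. B", "s6", 73, 75),
]

# Step 1: the district-wide average gain serves as the expected gain,
# since each student is compared with his or her own past performance.
gains = [current - prior for _, _, prior, current in records]
expected_gain = sum(gains) / len(gains)

# Step 2: a teacher's "value added" is the average amount by which
# their students' gains exceed (or fall short of) the expected gain.
residuals_by_teacher = defaultdict(list)
for teacher, _, prior, current in records:
    residuals_by_teacher[teacher].append((current - prior) - expected_gain)

value_added = {
    teacher: sum(resids) / len(resids)
    for teacher, resids in residuals_by_teacher.items()
}

for teacher, va in sorted(value_added.items()):
    print(f"{teacher}: {va:+.2f} points relative to the expected gain")
```

With these invented numbers, one teacher's students gain more than the district average and the other's gain less, so the two value-added figures come out equal and opposite. The point of the large samples the Times used is that such averages stabilize as the number of students and years grows.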

Lastly, I want to take note of the term that describes the statistical analysis used by The Los Angeles Times: value-added. Here at La Salle, we would want to expand the concept of "value-added" by including the religious, artistic and athletic dimensions which contribute to the whole person walking across the stage at Commencement. Readers of this space know that I am fond of describing our Mission in this way:

We produce good students and good people.

For us, the concept of “value-added” extends well beyond the measures of a standardized test. It must include the notion that our efforts as high school educators are in vain if our students don’t leave us better able to meet the challenges, not just of college, but of life as well. If I could figure out how to test for that, I would!