In south Alabama we sometimes refer to someone as “the real deal.”  In other words, they know what they are talking about.  Dr. James Popham is in this category.  Professor emeritus at the Graduate School of Education and Information Studies at UCLA, he is a past president of the American Educational Research Association.

Having heard many discussions about how we are misusing student test scores these days, I stopped to take a look when I happened upon an article in Education Week by Dr. Popham titled “The Fatal Flaw of Educational Assessment.”

Here is some of what I found:

“America’s students are not being educated as well these days as they should be. A key reason for this calamity is that we currently use the wrong tests to make our most important educational decisions. The effectiveness of both teachers and schools is now evaluated largely using students’ scores on annually administered standardized tests, but most of these tests are simply unsuitable for this intended purpose.

(Sounds as if he is talking directly to those who drafted the RAISE/PREP Act.)

“When we use the wrong tests to evaluate instructional quality, many strong teachers are regarded as ineffective and directed by administrators to abandon teaching procedures that actually work well. Conversely, the wrong test scores often fail to identify truly weak teachers—those in serious need of instructional assistance who don’t receive help because they are thought to be teaching satisfactorily. In both these instances, it is the students who are shortchanged.

“Today’s educational tests are intended to satisfy three primary purposes: to compare, to instruct, and to evaluate.

“Comparison-focused educational tests permit us to identify score-based differences among individual students or among groups of students. The resulting comparisons often lead to classifications of students’ scores on a student-by-student basis (such as by using percentiles) or on a group-by-group basis (such as by distinguishing between “proficient” and “nonproficient” students).

“A second purpose is instructional—that is, to elicit ongoing evidence regarding students’ levels of achievement so that better decisions can be made about how to teach those students.

“A third purpose is evaluation—determining the quality of a completed set of instructional activities provided by one or more teachers. These evaluations often focus on a lengthy segment of instruction, such as an entire school year.

“The trouble is that one of these purposes—comparison—has completely dominated America’s educational testing for almost a century.

“However, tests built chiefly for comparisons are not suitable for purposes of instruction or evaluation of instructional quality in education. These tests provide teachers with few instructional insights and typically lead to inaccurate evaluations of a teacher’s instructional quality.

“The time has come for us to abandon the naive belief that an educational test created for Purpose X can be cavalierly used for Purpose Z. Too many children in our schools are harmed by these methods because educators are basing their decisions on inaccurate information supplied by the wrong tests. We must follow the up-to-date advice of the measurement community and demand the use of purposeful educational testing.”

One of the key components of RAISE/PREP is using student test scores to determine how good their teacher may or may not be.  But as Dr. Popham points out, we are trying to force a square peg into a round hole.  And as we say again in south Alabama, “that makes no sense.”