If you are a fan of the multitude of police and crime scene investigation dramas currently on television, you know that critical clues about a murder can be obtained with a thorough autopsy conducted by a well-trained forensic scientist or medical examiner.
To be sure, autopsies can provide valuable information, but if you watch those scenes closely, you will see that the procedure rarely helps the patient get any better.
This conclusion can also be applied to another type of autopsy – the annual release of Oklahoma student test scores.
Tulsa World journalist Andrea Eger provided the results of this year’s autopsy of Tulsa-area public schools in this morning’s Sunday paper.
A summary of these results – an autopsy report, if you will – might sound something like this:
An external examination of the patient (students and schools) found them to be deceased. While we believe there are multiple factors and sources of trauma that may have contributed to the patient’s demise on last year’s tests, this report does not provide any meaningful insight as to what those factors might be, what effect they may have had on the patient, or what interventions may have prevented this result.
In some ways, the patient seemed to have been relatively healthy and there are multiple indicators that show positive effects from previous treatments. However, it appears that earlier interventions have largely been unsuccessful. In short – for the 20th year in a row – the patient died.
Nonetheless, education practitioners should use this data to justify doubling the dose of previous prescriptions. In other words, stay the course. Schools should continue to raise academic standards; make their assessments more rigorous; collect and analyze more data; and hold teachers and principals accountable – mainly because that’s the only treatment we can think of.
We know that this prescription has failed to work for a generation of patients. Yet, by following this plan of action, schools can at least be comfortable in reporting every year, “We’re sorry, but we did all we could.”
As always, it is important to note that these autopsy results reflect the performance of students nearly six months ago. Teachers are already six weeks into a new academic year and providing instruction to a completely different group of children than they had last April.
In other words, knowing whatever ailments affected our students last year doesn’t necessarily provide a worthwhile prescription for avoiding a similar outcome with our new students.
And, as I pointed out in my previous post, we have been reading the same autopsy results for at least two decades.
It is not just state test results that show this same trend. Let’s take a quick peek at state ACT and NAEP results for the past 20 years.
In 1996, 63% of Oklahoma high school graduates took the ACT and had an average composite score of 20.5. Twenty years later, we have a much larger share of students (82%) taking the test, but our composite average is down to 20.4.
The National Assessment of Educational Progress, or NAEP, is called the “Nation’s Report Card.” In its 2015 Oklahoma report, in nearly every area but fourth-grade math, the authors found the performance of every subgroup on the fourth- and eighth-grade math and reading assessments to be “not statistically different from 1998.” Moreover, the achievement gaps between white students and black, Hispanic, and high-poverty students were also not statistically different from the gaps measured 17 years earlier.
Therefore, after decades of accountability systems that have reported on “failing schools” year after year, we should not be surprised that these educational autopsies are without value.
As a state and nation, we have spent billions of dollars chasing higher test scores. At what point do we acknowledge all we’ve been doing is digging for fool’s gold?
So, going back to Selzer’s quote at the top, what “facts” and “truths” are evident from this year’s annual test autopsy?
- FACT: There are significant differences between the performance of students in different schools and between different demographic subgroups.
- TRUTH: Duh. Yet, in most cases, the differences in student performance are closely correlated to socioeconomic factors, family stability, mobility, and attendance. Schools have limited ability to positively impact any of these factors.
- FACT: Results of annual state testing provide the public with something for which they have a seemingly insatiable appetite – misleading rankings, sorting, charts, winners/losers, school grades, etc.
- TRUTH: Results of annual state testing tell us little about the overall quality of a school or the effectiveness of its teachers.
- FACT: Schools and teachers care a great deal about how their students perform on these tests.
- TRUTH: Too many students care very little how they perform on these tests.
- FACT: Many in the public view these test results as the most valid measure of student readiness for college and careers.
- TRUTH: They’re not.
- FACT: Improvement in student test scores is generally viewed as the result of better teaching and higher levels of student learning.
- TRUTH: Improvement in student test scores typically means that teachers and schools have done a better job preparing their students to memorize content and take standardized tests. This often comes at the expense of more authentic learning experiences which foster creativity, self-directed learning, and analytical and critical thinking.
- FACT: Schools with higher test scores are doing good things for kids. They have parents, teachers, and school leaders who care. They strive to meet the unique needs of each student in a safe, secure learning environment. The people who work there should be justifiably proud of their individual and collective impact on the lives of children.
- TRUTH: Schools with lower test scores are also doing good things for kids. They have parents, teachers, and school leaders who care. They strive to meet the unique needs of each student in a safe, secure learning environment. The people who work there should be justifiably proud of their individual and collective impact on the lives of children.
In summary, the current system of testing uses an autopsy of last year’s performance with no regard for methods, that is, how a level of performance was attained. It is exclusively outcome-based, namely test scores. Education is a mission-based endeavor; it cannot be defined solely by outcome measures.
More than anything else, these annual autopsies reveal no critical clues as to what we might do differently to keep our “patients” healthy.
As with real autopsies, the end result of this year’s autopsy of state test results for lower scoring schools will be a lot of finger-pointing and internal angst. The prescription for this year’s students will likely be more of the same, perhaps in higher doses.
More and less, like this:
- More professional development and less autonomy.
- More data analysis and less planning of authentic learning activities.
- More labeling of kids and less recognition of students’ other talents.
- More pressure on teachers and less trust of teachers.
- More remediation and less recess.
- More test-prep and less emphasis on art and music classes.
- More busy work for kids and less engagement.
- More stress and less joy.
- More teacher vacancies and fewer qualified candidates willing to do the job.
None of this will help the patient get any better.
And, one year from now, we will be looking at yet another autopsy report telling us the patient has died … again.