According to Wikipedia, the modern-day repository of all earthly knowledge, a vampire is defined as “a mythical being who subsists by feeding on the life essence (generally in the form of blood) of living creatures.” Some vampires can allegedly shape-shift, luring their unsuspecting victims by appearing charming and attractive before striking.
Similarly, Value Added Models (VAMs) are a real world beast that subsists by feeding on the life essence (generally in the form of morale and institutional culture) of public schools.
VAMs have been forced upon American public schools through the Federal Race to the Top (RTTT) grants and state ESEA waiver requests. They have been cleverly disguised as an attractive, reasonable-sounding approach to help teachers and administrators use student achievement results to inform instruction. If this was the only purpose, we might be inclined to let them live. It is not.
I have written extensively about VAMs in previous posts, including this 2,400-word missive, “Why VAM must Die,” from last May. If you are not familiar with the “science” and research behind value added models, this would be a good place to start.
The true goal of VAMs is to provide a tool for education reformers to prove the existence of millions of horrible educators in public schools across America.
The reformers know (because, you know, they just know) that there are a whole ton of terrible teachers out there. In their minds, the old evaluation systems didn’t reveal the existence of these awful educators, so the old system must not have worked. Thus, they must look for a new system that does work, and they will have proof that it works when it confirms their belief that a huge number of American school teachers suck. VAM may not work any better than tarot cards, palm reading, or tea leaves, but if it tells them that many teachers suck, that’s good enough for them, and certainly an improvement over the old system.
Value added models are the next step in the reformers’ playbook, one that has as its objective the dismantling of teacher unions and the disparagement of professional educators. They provide the next chapter in the narrative that American public schools are failing, and failing because teachers and school leaders are lousy and don’t care about children.
Once these VAMs have successfully infiltrated the fabric of schools, they will serve to leach the joy out of teaching by using inaccurate and unreliable data to sort, rank, and punish educators. Instead of promoting teacher collaboration in the best interests of students, VAMs and their associated student growth metrics will introduce self-serving competition, teaching to the test, and continued loss of student engagement.
In case you are not aware, Oklahoma is in the final stages of implementation of the state’s Teacher and Leader Effectiveness (TLE) Evaluation system. The TLE legislation, passed into law back in 2010, requires districts to base half of teacher and school leader evaluations on classroom observations (qualitative) and the other half on “multiple measures of student achievement (quantitative).”
The entire evaluation system depends on a so-called “mythology of objectivity.” This is the idea that we can quantify everything, come up with the perfect formula, and reduce all aspects of teaching to numbers that will not lie – after all, they are numbers.
But even if we assume for the moment that those high-stakes-tests our children are taking yield legitimate results, there are still serious problems with using those tests to evaluate teaching. First, they were only designed to measure student achievement – not how well our teachers are teaching. As any scientist will tell you, when you want to examine something, the measurements have to be designed to actually look at what you’re interested in. And second, they completely omit many of the most important elements of teaching – you know, those very things we as parents and concerned community members think about when we recall our very best teachers.
I recently saw the negative effect of VAMs at my own school. I watched the blood drain from the face of one of my best teachers when I shared with her the VAM score computed by the OSDE using student test results from the 2012-2013 school year. Despite the fact that this educator had over 95 percent of her students pass the algebra I end-of-instruction (EOI) test, her assigned VAM score was a ridiculous 2.3 on a five point scale. In short, one of the more effective and highly requested educators in my building was given a rating of “needs improvement” from the state department!
The reason for this low rating was easy to see. At Jenks Middle School, the majority of seventh grade students are enrolled in prealgebra, an eighth grade math course. Yet, by law, these students take the 7th grade math OCCT instead of the 8th grade prealgebra test at the end of the year. Since these students are advanced by one year, they tend to do well on the 7th grade math test, with the majority scoring advanced.
About two out of three of our students move on to take Algebra I or a higher math course in eighth grade. A prerequisite for students to take algebra at our school is that they score advanced on the 7th grade math OCCT.
This sets up a scenario where students’ algebra I scores are compared to their 7th grade math scores. However, as I have explained, these students have skipped a year of math. As a result, a high number of the students who earned a very high 800+ OPI (Oklahoma Performance Index) in seventh grade may earn a significantly lower score on the Algebra I EOI the next year (but still pass). Incidentally, the average score of this teacher’s students on the Algebra EOI exam was an incredible 763.8 (700 is passing).
Comparing our seventh grade students who earned high seventh grade math scores to other students in the state with similar scores is NOT accurate because many of these other students were enrolled in prealgebra their 8th grade year and were able to keep their scores higher. If they had also been enrolled in algebra I, their scores likely would have fallen as well.
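The distortion is easy to see with toy numbers. Here is a minimal sketch of a naive gain-score comparison; this is NOT the OSDE’s actual VAM formula, and every score below is hypothetical, chosen only to mirror the pattern described above:

```python
# Simplified gain-score illustration (NOT the actual OSDE VAM model).
# All student scores below are hypothetical.

def gain(prior, current):
    """Naive growth: this year's score minus last year's score."""
    return current - prior

# An accelerated student: a high 7th grade OCCT score, then Algebra I
# taken a year early. The student passes (700 is passing) but "growth"
# looks sharply negative.
accelerated = gain(prior=810, current=760)   # -50

# A grade-level peer with the same prior score who takes prealgebra in
# 8th grade and stays on familiar material keeps the score high.
on_level = gain(prior=810, current=805)      # -5

# A model grouping students by similar prior scores rates the
# accelerated student's teacher far lower, despite strong results.
print(accelerated, on_level)
```

Any growth model built on this kind of year-over-year comparison will systematically punish teachers whose students skipped ahead to harder material.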
The bottom line is that because of a school-based decision to place students in a higher level math course, my eighth grade algebra teachers are penalized with low VAM scores.
So, as a principal, do I continue a practice that will negatively impact my teachers (advancing students to higher math courses), or should we just keep our students on grade level? Our rationale for giving students the opportunity to take algebra and geometry in middle school was to provide a higher level of rigor and the chance to advance through Calculus in high school. However, if we were motivated by high VAM scores, we would abandon this initiative. Of course, we are not going to do this because it would be negative for students.
Another example of the negative effect of VAM on my school is teachers potentially “teaching to the test” rather than teaching the broader curriculum.
As I said, the majority of our seventh grade students are enrolled in prealgebra. The state standards for each of these courses are obviously different. By not allowing our students to take the appropriate level math OCCT, the state incentivizes my teachers to teach to the 7th grade test (using the 7th grade standards), rather than teach the prealgebra curriculum needed to prepare students for algebra I the next year. One of my teachers did this very thing and earned a very high VAM score. At the same time, her students were less prepared for algebra I than students from other prealgebra classes.
By focusing almost exclusively on preparing her prealgebra students for the seventh grade math test, this teacher unintentionally set her eighth grade colleagues up for failure. Her students will enter their algebra classes with very high 7th grade test scores yet will likely score much lower on the algebra EOI due to their limited background knowledge in prealgebra.
Again, this places me in a dilemma. This teacher had 98 percent of her students pass the 7th grade OCCT. Do I congratulate her for her students’ outstanding pass rate or do I admonish her for failing to adequately prepare students for algebra I in eighth grade? These are the types of scenarios that are created by an overemphasis on high stakes testing and a lack of flexibility in the Oklahoma state testing program. It also raises the question: Why don’t schools have the flexibility to give the proper math test to our students?
Another way that VAM will suck the life out of my teachers is by setting up two completely different evaluation systems.
Math and language arts teachers in grades 4 through 8, plus teachers of Algebra I, Algebra II, Geometry, and English III are the only teachers who will earn a VAM score. All other teachers will have their student academic growth (SAG) calculated by completing what the state refers to as a Student Learning Objective (SLO) or Student Outcome Objective (SOO).
The slide below is from the state department presentation during the Vision 2020 conference this past summer.
This gets complicated. If you really want to learn more about this process, you can access the OSDE links HERE.
Essentially, the SLO process goes like this. A teacher or group of teachers decides on a set of knowledge or skills they want their students to attain. They then conduct some sort of pretesting or data review to establish a baseline. Using this information, they set “rigorous yet reasonable” growth targets for their students. At the end of the instructional period (which could be a full year, a semester, a quarter, or foreseeably even a single unit), the teachers conduct a post assessment. This assessment could be a test, a student portfolio, a project, an essay, or about anything else the teacher(s) deem appropriate.
The table below shows how this might look.
[table id=5 /]
Based on student scores on a pretest, the teacher establishes growth targets that his or her students must meet. Where do these growth goals come from? They are simply plucked from the air with seemingly no historical basis. According to this example from the OSDE, any student who scored between 41 and 70 will have to earn an 80 on the post assessment to show adequate growth (and earn a point for the teacher). Ultimately, teachers are incentivized to have a large number of students reach their growth goals. Why? Because of the criteria set by the state department.
[table id=4 /]
In order to earn a 5.0, a teacher needs to have 90% of his or her students reach the growth goal that the teacher set in the first place. Just like setting cut scores for state testing, this process is highly susceptible to manipulation. As a teacher, I can simply set my growth goals lower to earn a higher score. If my administrator does not allow me to do this, I can just teach to the test by providing my students with “highly detailed” study guides for the post assessment. I can also encourage students to blow off the pretest (“Just fill in the bubbles, kids–it’s not for a grade anyway”), while simultaneously making the final exam important to their grade.
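The mechanics, and the manipulation incentive, can be sketched in a few lines. In this sketch, the 41–70 → 80 growth target and the 90%-earns-5.0 threshold come from the OSDE examples above; every other band and score is a hypothetical placeholder, not an actual state cut point:

```python
# Sketch of the SLO scoring process described above. The 41-70 -> 80
# growth target and the 90% -> 5.0 rating threshold come from the OSDE
# examples; all other bands here are hypothetical placeholders.

def growth_target(pretest):
    """Map a pretest score to the post-assessment score the student must reach."""
    if pretest <= 40:
        return 70          # hypothetical band
    elif pretest <= 70:
        return 80          # band given in the OSDE example
    return 90              # hypothetical band

def slo_rating(pretests, posttests):
    """Rate the teacher by the fraction of students who met their growth goal."""
    met = sum(post >= growth_target(pre) for pre, post in zip(pretests, posttests))
    pct = met / len(pretests)
    if pct >= 0.90:
        return 5.0         # threshold given by the state table
    if pct >= 0.60:
        return 3.0         # hypothetical middle band
    return 1.0             # hypothetical bottom band

# Four hypothetical students, each meeting their target -> top rating.
pre = [35, 50, 65, 80]
post = [72, 81, 80, 91]
print(slo_rating(pre, post))  # 5.0

# The manipulation incentive: lowering the targets (or deflating the
# pretest) raises `met` without any change in actual learning.
```

Notice that the teacher controls both the targets and, in practice, how seriously students take the pretest, which is exactly the gaming opportunity described above.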
Some of you may be thinking, “C’mon, Rob, do you really think that teachers would intentionally play the system to earn a higher score?” If this was not a part of their formal evaluation, I would hope not. However, by making this a 35% component of their evaluation, it almost guarantees that some teachers will do what they think they need to in order to keep their job.
This is an obvious implication of implementing this type of system. So, what do the creators of this nonsense advise states and districts to do if teachers and administrators do not take this process seriously?
This document from the OSDE website was written by an entity called the Reform Support Network. It provides some recommended guidance on what states might do if teachers attempt to game the system and set academic growth goals too low in order to earn higher SLO scores.
Although the development of SLOs is typically a collaborative process, States and districts must set policies for who has final approval of an SLO and will be held accountable for its quality. In Rhode Island, administrators must certify SLOs, attesting to their quality. In Georgia, the Department of Education must approve all SLOs. Finally, the quality of SLOs developed by teachers in a school can be included as a performance measure in principal evaluations.
Allow me to translate. If the state believes that some administrators are allowing teachers to set “low quality” SLOs, some possible remedies are to hijack the process (Georgia) or even count them against the administrator. Consequently, if my teachers’ SLO scores are too high, my evaluation could be negatively impacted. Again, this sets up a scenario where I am competing against my teachers. I can force them to set higher goals for their SLOs, which will lower their scores, but increase mine. This will certainly do wonders for building a climate of trust, respect, and collective efficacy in my school–NOT!
What is the wonderful research that the OSDE provides to justify this new SLO/SOO process? Take a look at this slide from the Vision 2020 presentation.
This is outrageous! What they are saying is that “we have seen some positive things, but also some negative things, so we are really not sure at this time.” Yet, we have no problem inflicting this unscientific and inaccurate process upon our educators. But, again, the biggest issue is that some teachers are going to be evaluated by VAM scores for which they have little control, while the majority of educators will be able to design their own evaluation instrument and measure their own progress. This is fundamentally unfair.
You think we have a teacher shortage in Oklahoma now? Wait a few years!
Along these same lines, Tulsa World journalist Andrea Eger published a very revealing article in today’s paper detailing Tulsa Public Schools’ use of student surveys as one component of teacher evaluations. While I have not studied their system, I cannot imagine using feedback from kindergarten students as a significant part of a teacher’s evaluation. Likewise, we all know how seriously secondary students take these types of surveys. From the perspective of students, if you are a middle school teacher who does not assign homework, gives out As and Bs, and allows us to listen to music on our phones during class, you are likely to earn some good marks. Conversely, if your class is too hard, you give us too much work, and you don’t let us text in class, we might have to punish you with low ratings.
I do think that parent and student surveys can provide useful information for educators and support authentic conversations between teachers and their administrators. That being said, I do not believe they should be used as a metric for evaluating teacher effectiveness.
So, I suppose I need to bring this to a close! At this point your head may be spinning. What’s the big deal? Why should we care?
If VAM is a sham, why are we wasting our time – and untold taxpayer dollars – on this stuff? VAM is garbage in, garbage out. There’s no research that shows a way to accurately and reliably account for out-of-school factors. This is all in the experimental phase. No one has done it. Research from other states has shown that teachers who get bad VAM scores can be the very ones who get the highest ratings from parents, those who inspire kids and are most humane.
The takeaway is this: we are wasting precious resources on a system that will not give us good results, resources that we know would be far better spent on early childhood education, or even textbooks and technology for our schools.
I would like to see our elected leaders have a real conversation about the impact of this legislation during the upcoming legislative session. They need to ask principals and teachers. If they don’t ask, we need to tell them. It is time for all of us to shine a bright light on the potential damage about to be inflicted on our schools by this VAM beast. We need to grow a backbone and begin to speak out. Enough is enough. These high stakes tests – and the VAM sham they perpetuate – are damaging our schools, our kids, and our teachers.
We must drive a stake through the VAM’s heart and kill it.