
It’s often been said that a picture is worth 1,000 words. I’m not sure how many words the following YouTube video is worth, but its silly portrayal of teaching consultants and value-added models (VAMs) is priceless!

In yesterday’s post, I shared my list of predictable consequences related to the implementation of the Oklahoma value-added models over the next few years.

The foremost concern at this time is that these models have simply not been shown to be statistically accurate or consistent measures of teacher effectiveness.

Many recent studies caution against the use of VAMs for high-stakes decisions because of their poor record of producing stable ratings of teachers, and the American Statistical Association (ASA) has issued a formal statement urging the same caution. For example, different statistical models (all based on reasonable assumptions) yield different effectiveness scores. Researchers have found that how a teacher is rated changes from class to class, from year to year, and even from test to test.

If you have any doubt, take a look at this chart from New York. The data were obtained by education blogger and author Gary Rubinstein; you can access his blog here.

This scatterplot compares the performance of teachers in 2008 and 2009 based on student test scores (VAM).

Can you identify the “line of best fit”? According to Rubinstein, the correlation (Pearson’s r) between the two years, with 2008 on the X-axis and 2009 on the Y-axis, was +0.3. For the non-statisticians out there, Pearson’s r is a measure of the linear correlation (dependence) between two variables X and Y, taking a value between +1 and −1 inclusive, where +1 is total positive correlation, 0 is no correlation, and −1 is total negative correlation.

Because the share of variance explained is the square of the correlation coefficient (r²), values between approximately −0.3 and +0.3 account for at most 0.3² = 0.09, or less than 9 percent, of the variance in the relationship between two variables, which indicates a weak or practically non-existent relationship.
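To see how little a correlation of +0.3 buys you, here is a minimal sketch in Python. The two years of teacher scores are made-up numbers constructed to correlate at roughly +0.3; they are not Rubinstein’s actual New York data.

```python
import numpy as np

# Hypothetical two-year VAM scores for the same 200 teachers.
# Made-up data, deliberately built to correlate weakly (~ +0.3).
rng = np.random.default_rng(42)
year_2008 = rng.normal(50, 15, size=200)          # 2008 percentile-style scores
noise = rng.normal(0, 15, size=200)
year_2009 = 0.3 * (year_2008 - 50) + 50 + noise   # weak carry-over plus noise

r = np.corrcoef(year_2008, year_2009)[0, 1]       # Pearson's r
print(f"Pearson r = {r:+.2f}")                    # roughly +0.3
print(f"r squared = {r**2:.2f}")                  # ~0.09 of variance explained
```

Squaring r is the whole trick: a coefficient of +0.3 leaves roughly 91 percent of the year-to-year variation in a teacher’s rating unexplained.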

In short, Rubinstein’s analysis shows that a teacher teaching the same grade and the same course, with the “same students,” does not get consistent results. It is truly like weighing yourself on a scale, stepping off, stepping right back on one second later, and having your “weight” change by twenty pounds.

According to the OSDE website, the new VAM for Oklahoma’s Teacher and Leader Effectiveness (TLE) system will supposedly control for the following variables:

  • Prior Achievement
  • Free/Reduced Lunch Status
  • Limited English Proficiency
  • Individualized Education Program (IEP)
  • Race/Ethnicity
  • Gender
  • Mobility
  • Prior Attendance

This means the “value added” by a teacher to the growth of student achievement on standardized tests over the course of the year can be more accurately determined after statistically removing the influence of these outside factors.

Without getting too far into the weeds, these models seek to untangle overlapping influences. It might help to visualize overlapping circles on a Venn diagram. For example, we know that poverty status and ELL status are associated with lower academic outcomes. So in theory, the model should remove this association so that teachers aren’t unfairly penalized for student characteristics over which they have no control.
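For the technically curious, here is a minimal sketch of how this kind of model typically works, using made-up data and only a few of the covariates listed above: regress current scores on the control variables, then treat each teacher’s average residual as their “value added.” This illustrates the general approach, not the actual OSDE/TLE model, which is considerably more elaborate.

```python
import numpy as np

# Made-up student records: prior score, free/reduced lunch flag, ELL flag,
# and an assigned teacher. Purely illustrative.
rng = np.random.default_rng(0)
n = 500
prior = rng.normal(50, 10, n)
frl = rng.integers(0, 2, n)        # free/reduced lunch status
ell = rng.integers(0, 2, n)        # limited English proficiency
teacher = rng.integers(0, 20, n)   # 20 hypothetical teachers

# Simulated outcome: current score depends on prior score and the student
# characteristics, plus a small teacher effect and noise.
teacher_effect = rng.normal(0, 2, 20)
current = (5 + 0.9 * prior - 3 * frl - 2 * ell
           + teacher_effect[teacher] + rng.normal(0, 5, n))

# Step 1: regress current scores on the control variables (ordinary least squares).
X = np.column_stack([np.ones(n), prior, frl, ell])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

# Step 2: the residual is the part of each score the controls don't explain.
residual = current - X @ beta

# Step 3: a teacher's "value added" is the average residual of their students.
for t in range(20):
    vam = residual[teacher == t].mean()
    print(f"Teacher {t:2d}: estimated value added = {vam:+.2f}")
```

Even in this clean simulation, a teacher’s estimate bounces around with the luck of the student draw, which is exactly the instability Rubinstein’s scatterplot shows in real data.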

These models may eventually provide some valuable information for teachers, principals, and schools for improvement efforts. However, this type of data will only be useful after many years of longitudinal data are collected and analyzed.

Trying to tie these new, untested measures to important decisions about teachers and school leaders is inaccurate, unreliable, and unethical.

Take a minute to chew on this hypothetical:

If we accept the premise that value-added models are reliable measures of teacher effectiveness and can accurately identify and control for all factors affecting student achievement, let’s use them. But not just for teachers and school leaders: let’s expand their use to assign appropriate credit or blame to everyone involved.

By the State Department’s own Theory of Action, researchers would certainly agree that parent effectiveness is the single most important home-based factor in student academic achievement. Furthermore, do you believe that every child deserves to have an effective parent every year? Do you believe every child deserves to have a team of effective adults throughout his or her childhood? Do you believe that parental effectiveness can be developed? And do you believe that parent growth can best be achieved through deliberate practice on specific knowledge and skills?

So why stop at measuring teacher and leader effectiveness? If we have the data, let’s put it out there.

Don’t you think parents would like to have a complete report showing the value they have added to their child’s academic performance? If we are really serious about the transparent use of data to inform decision-making, let’s flip over all the cards and see what we have to play with.

Each year, we can ask parents to complete their own Roster Verification for Parents (RVP). For divided homes, each parent would calculate their involvement in their child’s educational progress. As with teachers, they could use the online single sign-on to register responses in the three columns: (1) I reared these children; (2) During these months; (3) For this percent (%) of parenting.

To calculate the percent of parenting over the course of the academic year, parents would use the words “all,” “most,” “some,” “not much,” and “that ain’t my kid” to determine their individual contribution to the child’s academic success.

After student test results were obtained, the state would plug in the factors listed above and print a complete analysis for every adult who made a contribution (added value) to a child.

Would this not be extremely useful for everyone? We could clearly see the impact of poverty, gender, mobility, attendance, English proficiency, and prior achievement on a student’s performance. Additionally, we would have real data to show that having an effective parent in every home is the most critical factor in a student’s success.

Like other states, we could publish all of this data in the local newspapers. Not only would the community have information about the quality of their teachers, but they would also know which parents were superior, highly effective, effective, needing improvement, or ineffective.

We would also use the state’s household rating system to assign every home an A-F grade based on their children’s performance on state testing. These grades would be posted on large signs on the front door so that others would have an easy-to-understand measure of that home’s effectiveness.

I’m certain that ineffective parents would accept the ratings as constructive feedback and be fully motivated to improve their scores for the next year.

As part of their mandated parental development plan (PDP), parents needing improvement or receiving an ineffective rating would be required to attend classes to improve their effectiveness. These classes would be facilitated by parents who earned superior ratings, using the Pearson Effective Parenting Resource Guide. The guide would be written by David Coleman (and other people without kids) based on the National Common Core Parenting Standards (CCPS).

Of course, after three consecutive ineffective parent ratings, the state would have to step in. You know, that accountability stuff. The state would take over the family structure and put a Household Improvement Plan (HIP) in place. In the interim, it would offer the children the chance to live with a for-profit charter family or a voucher to shop the market for a family of their choosing.

I think this is an idea worth trying, don’t you? We can’t sit back and just do nothing.
