I know. I am mixing my metaphors, but the more I look at the results from this year’s state writing tests, the more convinced I am that we have another SNAFU on our hands.

It is becoming clear to most people outside of the State Department of Education that something went terribly wrong with the grading process for this year’s 5th and 8th grade tests. As okeducationtruths and I both discussed in posts this weekend, one significant question is how so many students earned the same score in all five elements of the scoring rubric. As I shared on Saturday, over 81% of the 755 students at Jenks Middle School were scored this way. This percentage seems extraordinarily high.

Another concern was how the scoring rubric was used to compute the composite scores. According to the OSDE website, both the 5th and 8th grade writing assessments are supposed to be graded using the same weighted rubric shown below.

**Ideas and Development (ID) = 30%
Organization, Unity, and Coherence (OUC) = 25%
Word Choice (WC) = 15%
Sentences and Paragraphs (SP) = 15%
Grammar, Usage, and Mechanics (GUM) = 15%**

Oktruths shared some great charts tonight showing how some of these scores simply do not make sense. I want to add some more detail using the actual scoring formula provided by Amy Nicar, the OSDE’s English Language Arts Assessment Specialist. She sent it in response to an email from an administrator in another district who asked how the composite scores were calculated.

The composite score (CS) is calculated as a weighted composite of the average of two independent ratings for each of the five analytic traits:

CS = 15(0.30ID + 0.25OUC + 0.15WC + 0.15SP + 0.15GUM)

Okay, back to the previous point. The fact that **TWO independent raters** could rate 81% of my school’s writing tests in precisely the same way in every category is beyond belief. But, let’s move on.

For those readers who might be “math averse,” hang in with me for just a minute. This formula is actually very straightforward. The equation simply takes each variable from the rubric and multiplies it by the appropriate weight. These individual factors are then combined and the total is multiplied by 15 to determine a composite score.

Therefore, it is easy to see how a student with scores of 2.0 in each of the five standards should earn a composite of 30. Since the weights sum to 1.0, you can either factor out the 2.0 or take it one step at a time.

CS = 15(0.3*2 + 0.25*2 + 0.15*2 + 0.15*2 + 0.15*2)

CS = 15(0.6 + 0.5 + 0.3 + 0.3 + 0.3)

CS = 15(2.0) = 30
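For anyone who wants to check this without a calculator, the formula takes only a few lines of code to verify. This is my own quick sketch (the function and variable names are mine; the weights are from the OSDE rubric above):

```python
# Composite score per the OSDE formula: CS = 15 * (weighted sum of trait ratings)
WEIGHTS = {"ID": 0.30, "OUC": 0.25, "WC": 0.15, "SP": 0.15, "GUM": 0.15}

def composite_score(scores):
    """scores: dict mapping each trait abbreviation to its averaged rating."""
    return 15 * sum(WEIGHTS[trait] * scores[trait] for trait in WEIGHTS)

# A student rated 2.0 on every trait:
all_twos = {trait: 2.0 for trait in WEIGHTS}
print(round(composite_score(all_twos), 3))  # 30.0 -- not 32
```

Run it with any combination of ratings and you get the composite the formula dictates; there is no set of inputs that turns straight 2.0’s into a 32.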

Makes sense, right? **Then why does every student score of 10.0 (all 2.0’s) earn a 32**, and not a 30, on the 8th grade writing test? Math is math. There is no way to compute a 32 using all scores of 2.0.

I decided to look at two other key numbers: **48 and 49**. These are important numbers because they are the highest writing scores that earn a proficient rating. A score of 50 earns an **advanced** rating.

This doesn’t affect many students at my school, but I use these numbers to further illustrate that the formula shared by Ms. Nicar was NOT used consistently or accurately by CTB.

**Example #1:** CS = 48 (3 students)

ID 3.0, OUC 3.5, WC 3.0, SP 3.5, GUM 3.5

CS = 15(0.3*3.0 + 0.25*3.5 + 0.15*3.0 + 0.15*3.5 + 0.15*3.5)

CS = 15(0.9 + 0.875 + 0.45 + 0.525 + 0.525)

CS = 15(3.275) = 49.125

**Example #2**: CS = 48 (2 students)

ID 3.5, OUC 3.0, WC 3.0, SP 3.5, GUM 3.5

CS = 15(0.3*3.5 + 0.25*3.0 + 0.15*3.0 + 0.15*3.5 + 0.15*3.5)

CS = 15(1.05 + 0.75 + 0.45 + 0.525 + 0.525)

CS = 15(3.3) = 49.5

**Example #3**: CS = 48 (3 students)

ID 3.5, OUC 3.5, WC 3.5, SP 3.0, GUM 3.0

CS = 15(0.3*3.5 + 0.25*3.5 + 0.15*3.5 + 0.15*3.0 + 0.15*3.0)

CS = 15(1.05 + 0.875 + 0.525 + 0.45 + 0.45)

CS = 15(3.35) = 50.25

**Example #4**: CS = 48 (4 students)

ID 3.0, OUC 3.5, WC 3.5, SP 3.5, GUM 3.5

CS = 15(0.3*3.0 + 0.25*3.5 + 0.15*3.5 + 0.15*3.5 + 0.15*3.5)

CS = 15(0.9 + 0.875 + 0.525 + 0.525 + 0.525)

CS = 15(3.35) = 50.25

**Example #5**: CS = 49 (2 students)

ID 3.5, OUC 4.0, WC 3.5, SP 3.0, GUM 3.0

CS = 15(0.3*3.5 + 0.25*4.0 + 0.15*3.5 + 0.15*3.0 + 0.15*3.0)

CS = 15(1.05 + 1.0 + 0.525 + 0.45 + 0.45)

CS = 15(3.475) = 52.125

Here is the bottom line. Out of 12 scores of 48 reported for students at Jenks Middle School, NONE of them actually computes to a 48 using the OSDE formula! In fact, seven of these 12 students should have earned an advanced score (50.25) and two more should round up from 49.5.

Of the two students who earned a 49, their actual score computes to 52.125, more than three points higher. This would mean that 11 students with advanced scores have been incorrectly scored as proficient at my school alone.

To be fair, the errors go both ways. Here is a student who was given a passing score of **36.0** but whose ratings compute to a limited-knowledge rating.

ID 2.5, OUC 2.0, WC 2.5, SP 2.0, GUM 2.5

CS = 15(0.3*2.5 + 0.25*2.0 + 0.15*2.5 + 0.15*2.0 + 0.15*2.5)

CS = 15(0.75 + 0.5 + 0.375 + 0.3 + 0.375)

CS = 15(2.3) = 34.5 (limited knowledge)
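To make sure the arithmetic above is not my own error, here is a short sketch (again my own code, using only the published OSDE weights) that recomputes every example in this post in one pass:

```python
# Recompute the composite for each example above using the OSDE formula.
# Trait order: ID, OUC, WC, SP, GUM; weights from the published rubric.
WEIGHTS = (0.30, 0.25, 0.15, 0.15, 0.15)

def composite(ratings):
    return 15 * sum(w * r for w, r in zip(WEIGHTS, ratings))

examples = {
    "Example #1 (reported 48)": (3.0, 3.5, 3.0, 3.5, 3.5),
    "Example #2 (reported 48)": (3.5, 3.0, 3.0, 3.5, 3.5),
    "Example #3 (reported 48)": (3.5, 3.5, 3.5, 3.0, 3.0),
    "Example #4 (reported 48)": (3.0, 3.5, 3.5, 3.5, 3.5),
    "Example #5 (reported 49)": (3.5, 4.0, 3.5, 3.0, 3.0),
    "36.0 student":             (2.5, 2.0, 2.5, 2.0, 2.5),
}
for label, ratings in examples.items():
    print(f"{label}: computed {round(composite(ratings), 3)}")
```

Not one of the computed values matches the score that was reported.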

The central question is this: how can we trust these data? These are not complicated algorithms. Not only was the writing rubric applied inconsistently by the CTB scorers; the calculations based on those numbers are rife with errors as well.

Why is the State Department so reluctant to admit there are problems with these results? I assume they have smart people with calculators.

The reality is that the last thing Dr. Barresi wants three weeks from election day is another CTB testing SNAFU on her watch. Instead of advocating for teachers and students, she is more concerned about votes. If mistakes were made, they should be investigated and corrected.

Instead, we get the typical dodge and cover. In his response to the Tulsa World Monday, Phil Bacharach stated that the OSDE stood by the scores: “If schools have what they say are ‘clear-cut’ examples of inaccurate scoring, they should bring those materials to the attention of (the state Department of Education) so we can evaluate.”

I just did, Phil. What now?