By miller727@icloud.com | May 31, 2014

To paraphrase Shakespeare, something is rotten in the offices of CTB/McGraw Hill and/or the Oklahoma State Department of Education.

A long-running joke I have with my staff is that the reason the OCCT results are often late or rife with errors is that they are graded by a band of Madagascar ring-tailed lemurs. I referred to these primates in a recent post, "It's in the Lemurs' Hands Now."

After a recent review of CTB’s preliminary results provided to schools last week, I’m starting to think this may not be far off.

Thank you to several readers who alerted me to some strange issues related to the 5th and 8th grade Writing Tests. These tests were administered in late February, so CTB has had three months to score them.

It started with this comment from a Norman teacher on my Friday post.

Any thoughts on the ridiculous preliminary writing test scores from CTB??

I responded that I had not had a chance to look at the scores and asked other readers to comment. The floodgates opened.

I too was hoping you were about to blow this writing test debacle out of the water. What we are seeing is that a large percentage of our best students were LK [limited knowledge] or UN [unsatisfactory]. We have their samples on a disk. Many are very good. I called our DTC and right away he knew why I was calling. He said it was revealed to all DTCs this week that a different rubric was used that severely punishes students who cite their evidence. My teachers attended a writing workshop put on by the Reach coaches that told us exactly how to teach them to cite their evidence. So, that is how we taught them. Additionally, many of my LK students scored a 35. Passing is a 36. I question whether it is mathematically possible to score a 36. I haven't found anyone yet who scored a 36 on the nose. Something fishy is going on here!

Writing scores were dismal for us, with the same issues as above. In our school 34 students took the test and only 9 passed. 20 scored LK, with 15 of those getting a score of 35. For all but 1 student, the scores for each area are exactly the same. If they got a 2.0 in one area, every area was a 2.0. Discouraging to say the least!

Fifth grade writing scores are a joke! Our DTC called CTB to request that some of ours be graded again and was told they would be happy to! But if the score changes, it will be free; if it doesn't change, we will be charged $125 a test. I would bet many wouldn't change. He also asked for the rubric used and was denied! Surprise, surprise, surprise!

Please let us know what you discover about the writing test scores. Some of my top students who read and write above grade level received lower scores than my lowest students who read and write below grade level. Also, some of my best language students, who have excellent spelling and grammar, received 1's and 2's on sentences and paragraphing and on grammar, usage, and mechanics. Their score was the same as my low kids who can't spell, do not use end marks, and write everything as one big sentence. I used a common core writing book and samples provided by the SDE which show them how to cite evidence. Even the instructions on the test tell them to cite evidence. It appears to me that the students who followed instructions received lower or equal scores to the students who did not follow instructions. I guess this is part of "reforming public education".

Ours (Moore) were bad, inexplicably bad. For the vast majority of the student responses, scores were the same for each of the five traits. If a student received 2.0 for Ideas and Development, he/she also received a 2.0 for each of the other four traits. We don’t feel the rubric was applied correctly. We also feel the $125 fee to re-score the tests is outrageous – more of a deterrent than anything else.

I also noticed that most students received the same score for each trait. One of my gifted students and best writers received a 25 while one of my LD students who reads and writes at a 2nd grade level received a 28.

Overall, Moore schools' scores were unexpectedly poor. We saw that many at our school scored LK with a score of 35, while the passing score was a 36. We have yet to figure out how you would receive a 36. We heard the most common mistake was plagiarism; however, we were trained to train our students to say "according to the text…" or "The author said…" and then quote from the reference pieces to which the students were to refer. Also, close reading, a big part of Common Core, encourages referring back to the text and properly citing it. Most 5th graders used this skill and are now being told that they plagiarized.

Let me start by sharing that the writing scores at Jenks Middle School were also quite poor. We expected a drop with the introduction of the new, "more rigorous," CCSS rubric, but not to this extent. Out of 755 tests administered, only 470 of our students earned a score of proficient or advanced, a passing rate of 62.3%. Less than 20% of students on IEPs and English language learners were able to earn a passing score.

There do seem to be some real concerns with the grading scale. Several comments refer to the fact that there were many scores of 35 but very few of 36 (the passing score).

There are five areas scored on the writing rubric. Both the fifth and eighth grade rubrics for the “transitional CCSS writing test” include the following scored standards. The scoring “weights” for each standard are also listed. I will come back to this in a minute because this is where things start to get fishy.

Ideas and Development—30%

Organization, Unity, and Coherence—25%

Word Choice—15%

Sentences and Paragraphs—15%

Grammar, Usage, and Mechanics—15%

Both writing rubrics are on the OSDE website and can be viewed (5th) HERE and (8th) HERE.

Let's get back to the scoring. Each of the five standards is graded on a scale of 1.0 to 4.0 in 0.5-point increments, so a student's raw total can range from 5.0 to 20.0. Again, using the 755 scores I have to review, I will show you how the scores for the 8th grade test break down at my school. The lowest reported score possible is a 15 and the highest is a 60.

At first glance, it appears that the scores are derived by adding the point totals from each standard and multiplying by three. In the chart below, that rule holds exactly for raw scores of 5.0, 12.0, 15.0, 18.0, and 20.0 (and for one of the two results at 14.0). It is also evident that this is not always the case.

Raw score (sum of the five standards) = reported total score:

5.0 = 15
5.5 = 24
6.5 = 25
7.5 = 29
8.5, 9.0, or 9.5 = 30
10.0 = 32
10.5 = 35
11.0 = 35 or 36 (36 is the proficient score)
11.5 or 12.0 = 36
12.5 = 38
13.0 = 37 (only one of these)
13.5 = 41 or 42
14.0 = 41 or 42
15.0 = 45
16.0 = 47
16.5 = 48
17.5 = 52
18.0 = 54
19.5 = 56
20.0 = 60

It is obvious from this chart that the weights discussed above WERE NOT USED, or were used haphazardly. Any raw score of 12.0 earned a 36 regardless of how the points were distributed. Yet in one case a raw score of 11.0 earned a passing score of 36 with individual standard scores of 3/2/2/2/2, while another 11.0 (2/3/2/2/2) earned a limited knowledge score of 35.
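To make that check concrete, here is a minimal sketch in Python of the two competing scoring rules: what the published weights imply versus the flat sum-times-three pattern the chart suggests. The function names are mine, and the scaling of the weighted average onto the 15-60 reporting range is an assumption, since CTB has not published its conversion.

```python
# Published weights for the five scored standards (OSDE transitional CCSS rubric)
WEIGHTS = {
    "ideas": 0.30,         # Ideas and Development
    "organization": 0.25,  # Organization, Unity, and Coherence
    "word_choice": 0.15,   # Word Choice
    "sentences": 0.15,     # Sentences and Paragraphs
    "grammar": 0.15,       # Grammar, Usage, and Mechanics
}

def weighted_total(traits):
    """What the published weights imply: a weighted average of the five
    1.0-4.0 trait scores, scaled (by assumption) onto the 15-60 range."""
    average = sum(traits[name] * weight for name, weight in WEIGHTS.items())
    return average * 15  # an average of 1.0 maps to 15; an average of 4.0 maps to 60

def flat_total(traits):
    """What the chart above suggests was actually done: sum the five trait
    scores and multiply by three, ignoring the weights entirely."""
    return sum(traits.values()) * 3

# The two 11.0-point papers described above:
paper_a = {"ideas": 3.0, "organization": 2.0, "word_choice": 2.0, "sentences": 2.0, "grammar": 2.0}
paper_b = {"ideas": 2.0, "organization": 3.0, "word_choice": 2.0, "sentences": 2.0, "grammar": 2.0}

print(round(weighted_total(paper_a), 2), round(weighted_total(paper_b), 2))  # 34.5 33.75
print(flat_total(paper_a), flat_total(paper_b))                              # 33.0 33.0
```

Notice that neither rule reproduces the reported results: the weighted rule distinguishes the two papers, and even ranks them the way the reports do, but it produces 34.5 and 33.75 rather than 36 and 35, while the flat rule gives both papers an identical 33. Whatever conversion CTB actually applied, it matches neither reading of its own rubric, at least under this scaling assumption.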

However, a 10 always earns a 32, a 15 always earns a 45, and so on for most of the scores. The only exceptions were for scores of 11.0 (35 or 36), 13.5 (41 or 42), and 14.0 (also 41 or 42).

Also note the odd fact that a raw score of 7.5 earns a 29, while an 8.5, 9.0, or 9.5 earns only one more point (30). Suffice it to say, this doesn't seem to make much sense.

My school did have 17 students earn a score of exactly 36. Yet this represents only 2.3 percent of all students tested.

The biggest issue discussed in the comments related to the fact that most of the scores were the same for each standard. As mentioned above, if a student scored a 2.0 for Ideas and Development, he/she also typically received a 2.0 for each of the other four traits. This clearly does not pass the smell test.

Looking once again at the data from Jenks Middle School, we had an incredible 613 out of 755 scores (81.2%) that had the exact same score for every standard on the rubric.
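If your district receives its preliminary results as a data file, a few lines of Python will tell you how widespread this flat-line scoring is. The file name and column names below are hypothetical; substitute whatever your export actually uses.

```python
import csv

# Hypothetical column names for the five trait scores in a district export
TRAIT_COLUMNS = ["ideas", "organization", "word_choice", "sentences", "grammar"]

flat = total = 0
with open("writing_scores.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        scores = [float(row[col]) for col in TRAIT_COLUMNS]
        total += 1
        if len(set(scores)) == 1:  # all five traits received the identical score
            flat += 1

print(f"{flat} of {total} students ({flat / total:.1%}) scored identically on all five traits")
```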

Here is one page out of many in my school report in which EVERY student on the page earned the same individual score for every standard of the rubric.

How can this possibly be accurate? The standards measure completely different writing skills. Many students may have good ideas and organization but limited skills in grammar and word choice or vice-versa.

The fact that the vast majority of students earned the same score across the board reflects shabby, lazy, and inaccurate grading on the part of CTB.

We know that several other testing vendors utilize temporary staff to grade these types of assessments. Is it possible that CTB figured, “Hey, it’s our last year with Oklahoma. Why waste a bunch of time doing a good job on these? The money is already in the bank.” Maybe this is an example of the high quality machine grading we have been warned about.

Or perhaps the State Department knew this was the last year for the separate writing test and didn’t really care how accurate the results were.

But we do care! These scores will be used as part of our A-F report cards and therefore can have a significant effect on schools. If we must be subjected to these ridiculous tests, we should at least be fairly evaluated. Our state has paid CTB millions of taxpayer dollars for accurate reporting of test results, and we are clearly not getting what we paid for!

Not only that, CTB is trying to milk another $125 per test from schools to reassess any questionable scores. If the score does change, the money is refunded. If not, CTB keeps it. How many districts have the resources to ask for large numbers of these tests to be regraded (and risk losing the money)? Granted, this has been the rescoring policy for several years, but with this number and type of obvious discrepancies, Superintendent Barresi and the State Department should be demanding that ALL of the tests be rescored, at NO cost to Oklahoma taxpayers!
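The deterrent is easy to quantify. A district that challenges a batch of tests gets refunds only on the scores that come back changed and is out $125 for every score that stands. A quick sketch, with hypothetical numbers:

```python
RESCORE_FEE = 125  # dollars per test, per the policy described above

def expected_cost(num_challenged, fraction_changed):
    """Out-of-pocket cost of challenging num_challenged scores when only
    fraction_changed of the rescores come back different (and are refunded)."""
    return RESCORE_FEE * num_challenged * (1 - fraction_changed)

# Hypothetical example: a school challenges 100 tests and 20% come back changed
print(expected_cost(100, 0.20))  # 10000.0 -- $10,000 at risk for a single school
```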

I have not even touched on several of the other issues communicated in the comments above, related to how students were assessed for citing evidence and to discrepancies between students' scores and their typical class performance. These issues must also be fully investigated to ensure that CTB has followed the OSDE assessment guidelines accurately. All superintendents and test coordinators in Oklahoma should review their district's scores and make their voices heard at the State Department. This type of unsatisfactory performance cannot be tolerated. Yet we continue to let these testing companies off the hook.

We are getting ripped off and the lemurs are laughing all the way to the bank.
