I thought I would toss together a little acronym salad to enjoy with your evening meal.
If you don’t find it appetizing, just sprinkle on a little Hidden Valley Ranch, because…it goes with everything!
(For any Hidden Valley Ranch executives reading this, you’re welcome! Just shoot me an email and I’ll tell you where to send the check.)
Anyway, grab a fork and let’s dig in! (Unless you’re prone to driving forks or other sharp objects into your head when you read stories about test-based idiocy; in that case, put the fork well out of reach before proceeding.)
If you have anything to do with state testing of students in grades six and above, you’ve likely been made aware of the following guidance emailed to District Test Coordinators Monday.
ELA Equivalent
Information was shared last week related to paired passages on the ELA online assessments. These tests are the grades 6, 7, and 8 Reading test, and the English II and III End-of-Instruction exams (EOIs). Students should click on the tab labeled 2 at the top of the page to see the second passage. Since many students had already taken their ELA test before this information was shared, if the student feels knowing this would make a difference on their performance, the student may take the Equivalent test. The district should obtain the parent’s agreement, and then invalidate the operational test. An Equivalent version will be made available for the student to test. Once the operational test is invalidated, the score on the Equivalent test will stand as the student’s score.
Allow me to chew up this piece of testing salad for you and regurgitate it in digestible form (okay, this metaphor is getting a little gross, so let me just get to the point).
Here’s what this guidance really says:
The testing vendor, Measured Progress, made a big OOPS and neglected to include directions telling students how to navigate between the paired passages on this year’s reading tests. This includes the grades six through eight reading tests and the English II and III EOIs.
As a result, some students were unable to access important information they needed to answer the questions.
Of course, students couldn’t ask their teacher for clarification because teachers are not allowed to deviate from what is published in the Test Administrator’s Manual (TAM). Plus, the state forbids teachers from even looking at a student’s computer screen, an offense that can cost them their teaching certificate.
Therefore, many students simply made their best guess on some of the questions. And some of these students are apparently bad guessers. As a result, a good number of students scored lower than they might have with all of the needed instructions. More importantly, many failed.
To correct for this problem, the state is allowing students (with parent agreement) to have their first test invalidated and take an equivalent test in hopes of earning a higher score. Of course, only “if the student feels knowing this would make a difference on their performance.”
Well, I guarantee if I failed one of these tests, I am definitely feeling it!
Passage of the English II EOI is needed to earn a diploma. Other students must earn a passing score on the eighth grade reading test to get a driver’s license. Failing any of the other reading tests may mean being assigned remediation in lieu of an elective. In short, these are HIGH STAKES tests: for students, for teachers, and for schools.
At the same time, a student who earned a passing score, even by a point, would have no incentive to retake the test. Once the decision is made to invalidate the first test, that score is gone, which means that there is nothing to gain and everything to lose by taking the test again.
On the other hand, at many schools, I expect a large number of students who failed the first time will choose to take the test again. This is the right way to correct for a mistake made by the testing vendor, and for that reason, I applaud the state department for making this option available.
But here’s a question I am now pondering. And I believe the answer is going to be VERY important.
What does this situation mean for the Value-Added Models being developed for Language Arts teachers this school year?
Unless the implementation of TLE Quantitative Measures is delayed by the legislature in the next few weeks, these VAM scores will count for 35 percent of these teachers’ evaluations.
With this recent decision and guidance, the ELA VAMs appear to have just been made INVALID!
In short, the formulas may now be “No Value Models” (NVMs).
Allow me to explain.
Let’s consider an eighth grade student (Sam) who scored Advanced last year with an 800 on his reading test. This year, because of the missing instructions, he scored only a 700: barely passing, but still proficient. Sam is exactly the kind of student with zero incentive to take the test again, so his score will likely stand as it is.
However, when his English teacher, Ms. Jones, receives her VAM score for Sam next fall, it will show a 100-point drop from one grade to the next. Her evaluation will then suffer, not because she did a poor job, but because the testing vendor did a poor job preparing the directions for this test. That is both unfair and inaccurate.
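In bare-bones terms (the state’s actual model is fancier than this, but it starts from numbers like these), Sam’s contribution to the growth calculation works out to

$$ \text{Sam's growth} = 700 - 800 = -100, $$

a hundred-point deficit that has nothing to do with anything that happened in Ms. Jones’s classroom.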
Moreover, the state’s value-added model uses a “typical-peer comparison,” measuring how students in one school stack up against demographically similar students in other schools who scored at the same level the year before.
Using this information, the formula claims to account for any outside factors that affect student achievement and to compute a statistically valid Student Academic Growth (SAG) factor.
In theory, the goal is to compare “apples-to-apples” so that accurate attribution can be assigned to each teacher.
If that is confusing, maybe a simple formula will help:
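(What follows is my own generic sketch of a typical-peer growth model; the symbols are illustrative, not the state’s official notation.)

$$
\text{SAG}_{\text{teacher}} \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr),
\qquad
\hat{y}_i \;=\; \beta_0 + \beta_1\, y_i^{\text{prior}} + \beta_2\, \text{demographics}_i
$$

Here $y_i$ is student $i$’s score this year, and $\hat{y}_i$ is what a “typical peer” with the same prior score and demographics would be predicted to earn; the teacher gets credit (or blame) for the average gap across her $n$ students.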
On second thought, maybe not!
Another thought. How can we trust comparisons derived from students taking two different tests?
Student A and student B are from different districts. One district lets its students take advantage of the opportunity to take the equivalent test; the other decides it’s not worth the trouble. If student A and student B both scored 750 on their reading tests last year, but student A scores higher this year because she took the equivalent test, while student B kept his score from the flawed operational test, is this still a fair and valid comparison?
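To make that concrete, here is a toy calculation in Python (every number below is invented, and the state’s growth math is far more elaborate than simple subtraction):

```python
# Toy illustration only: two students with identical prior-year scores,
# whose districts made different choices about the Equivalent retest.
# Every number here is invented for the sake of the example.

last_year = 750  # both students scored 750 on last year's reading test

# Student A's district offered the Equivalent retest; she recovered the
# points she lost to the missing navigation directions.
student_a_this_year = 780

# Student B's district skipped the retest; he kept the depressed score
# from the flawed operational test.
student_b_this_year = 720

print(f"Student A growth: {student_a_this_year - last_year:+d}")  # +30
print(f"Student B growth: {student_b_this_year - last_year:+d}")  # -30
```

Two “identical” students, two very different growth numbers, and the difference has nothing to do with either student’s teacher.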
And, if this year’s VAM scores are unreliable, how will next year’s scores be any better since they will be compared to this year’s?
It doesn’t stop with VAM scores. Another possible implication is the impact on next year’s A-F school report cards.
The overall school performance component, which makes up 50 percent of the A-F formula, may come in lower than last year depending on how many students choose to retake the tests. Just as student growth affects a teacher’s value-added score, it also plays a part in the calculation of the school’s growth components.
I also suspect that this state department guidance may not be implemented uniformly from one district to another across our state. And, since teacher evaluations and school A-F report card grades are now dependent on comparisons between students in these districts, will this not affect the reliability and validity of these measures?
If you recall, last fall, our previous state superintendent decided to remove the results from last year’s 5th and 8th grade writing tests from the A-F calculations because of challenges to their validity.
Therefore, will the OSDE need to similarly consider omitting the results of this year’s reading tests and English II and III EOIs from next year’s A-F grade calculations?
At this point, I am just thinking out loud. But I believe these are serious questions that need to be seriously evaluated.
A final thought. Have I mentioned recently how absolutely asinine all of this testing nonsense is? WTH!
Wouldn’t it be nice to just skip the testing acronym salad and get to the main course: teaching children?