
diaryland: sirilyan.diaryland.com: entry for 2004-06-04 (18:37)
In which our plucky young hero always aced them, so that's not the point.

Want to see me argue deftly and shrewdly? Alternatively, want to see me get rhetorically beaten like a red-headed stepchild? Read this for context and decide which it is. Or, if you're lazy, here's the relevant part of the post I'm more pleased with:

I think I didn't state my point well, so I'll restate it: meaninglessness generalizes. LOC [lines of code] is a stupid measure of programmer ability because it doesn't provide an indication of quality -- it just spits out a number. And that number is useless. But this isn't a problem specific to the LOC metric, one that scantron tests or provincial essay-writing tests with a standard marking guide magically avoid. The rot goes through the entire tree.

Scantron tests don't measure student ability. They give a measure of circle-filling ability. (And they don't provide good life experience, unless most of your life consists of picking from exactly four clearly different and easily-distinguished choices, two of which are obviously wrong. I don't know about you, but I don't spend most of my time looking at restaurant menus that offer a choice of Hamburger, Styrofoam, Sheep, or Poison.)

Essay questions, if they have a single marking guide that removes all subjectivity, test the ability of the student to guess what's in the marking guide -- a while back, Kevin Drum provided a critique, by way of example, of how this can go wrong. And if they don't have a standard marking guide, so that each person grading gets to use their judgement... well, we already have that. It's called the system as it exists already, without standardized testing.

It's true that math questions, with a single marking guide, come close to being regular enough for standardization... but even then, there's slippage. Math-test evaluators could spend hours arguing over whether to give a higher score to student A, who got the right answer with the wrong method, or student B, who used the right method but produced the wrong answer. Lucky guess vs. close enough, triangle wins.

...

But there's always a footnote, isn't there? The caveat here is that most standardized-testing proponents talk about the tests as a performance review rather than a diagnostic tool. Fill in enough of the right bubbles and your school gets extra cash in the next state budget. Fill in too few and your school doesn't get extra cash, and from what I've heard about them, your parents don't get test results with sufficient explanation to improve your situation.

It'd be nice if standardized testing produced better students, but all it seems to produce is better test-takers. And you can take that to the zeppelin hangar.

(Personally, I'm leaning closer to the "redheaded stepchild" side of the mark. There are some weaknesses here, and maybe I'll reply to my own post later if nobody else finds them. And maybe someone should test my forensics skills. I've got a #2 pencil handy, just in case.)
