
PISA Discovering That Accuracy = Boredom To The Press

By Steve Sailer

12/05/2013

Three years ago, Andreas Schleicher and the other well-funded folks at PISA were media darlings. This year … not so much. You can sense that the bloom is off the rose.

A big part of PISA’s new PR problem is that the results were so similar from 2009 to 2012. Now, you might think that stability is a good sign that suggests that the PISA people aren’t just pulling these numbers out of thin air. But accuracy is boring. The media likes change for the sake of change. Who’s up? Who’s down? A school test that’s more or less a giant budget IQ test doesn’t produce enough random changes to maintain media interest.

Decades ago, when the news magazine US News & World Report was launching its college ranking system, there was much interest from year to year as it improved its methodology, frequently vaulting overlooked colleges toward the top. But after a while, USNWR got pretty good at measuring as much as could be conveniently measured … and then what? Colleges, it turns out, don’t change much from year to year, so the future looked a lot like the present. And without trends, we don’t have news.

So, USNWR came up with the idea of changing some of the fairly arbitrary weights in its formula each year to generate a new #1 frequently. One year, for example, Caltech shot up to #1, which generated a lot of press coverage. But it was almost all just churn for the sake of churn. Caltech was pretty much the same place before, during, and after its sudden rise and fall.
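To make the mechanism concrete, here is a toy sketch in Python (the colleges, factors, and weights are made up for illustration; this is not USNWR's actual formula) of how merely reshuffling the weights in a composite score crowns a "new" #1 even though nothing about the schools has changed:

# Hypothetical example: arbitrary re-weighting of a composite score.
# Colleges, factors, and numbers are invented; only the mechanism is the point.
scores = {
    # (selectivity, resources, reputation), each on a 0-100 scale
    "College A": (95, 80, 90),
    "College B": (85, 95, 88),
    "College C": (90, 88, 92),
}

def rank(weights):
    """Sort colleges by weighted-sum score, best first."""
    return sorted(scores, key=lambda c: -sum(w * x for w, x in zip(weights, scores[c])))

print(rank((0.5, 0.2, 0.3)))  # this year's weights:  ['College A', 'College C', 'College B']
print(rank((0.2, 0.5, 0.3)))  # next year's weights: ['College B', 'College C', 'College A']

Same colleges, same data, different headline.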

But spectators like churn. In fact, one side effect of bad quantitative methodologies is that they generate phantom churn, which keeps customers interested. For instance, the marketing research company I worked for made two massive breakthroughs in the 1980s toward dramatically more accurate methodologies in the consumer packaged goods sector. Before we put checkout scanner data to use, market research companies were reporting a lot of Kentucky windage. In contrast, we reported actual sales in vast detail. Clients were wildly excited … for a few years. And then they got kind of bored.

You see, our competitors had previously reported all sorts of exciting stuff to clients. For example, back in the 1970s they'd say: Of the two new commercials you are considering, our proprietary methodology demonstrates that Commercial A will increase sales by 30% while Commercial B will decrease sales by 20%.

Wow.

We'd report in the 1980s: In a one-year test of identically matched panels of 5,000 households in Eau Claire and Pittsfield, neither new Commercial A nor Commercial B was associated with a statistically significant increase in sales of Charmin versus the matched control group that saw the same old Mr. Whipple commercial you've been showing for five years. If you don’t believe us, we'll send you all the data tapes and you can look for yourselves.

Ho-hum.

It was pretty amazing that we could turn the real world into a giant laboratory (and this was 30 years ago). But after a few years, all this accuracy and realism got boring.
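At bottom, the matched-panel comparison described above is a plain significance test on test-versus-control sales. A minimal sketch in Python (the 5,000-household panel size is taken from the example above; the purchase rates, simulated data, and code are illustrative assumptions, not the firm's actual methodology):

# Hypothetical sketch of a matched-panel test: did the new commercial lift sales?
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Simulated annual purchases per household: a 5,000-household control panel that
# saw the old commercial, and a matched 5,000-household panel that saw Commercial A.
control_panel = rng.poisson(lam=4.0, size=5000)   # old Mr. Whipple spot
test_panel_a  = rng.poisson(lam=4.05, size=5000)  # new Commercial A

# Two-sample t-test: is the difference in mean purchases statistically significant?
t_stat, p_value = ttest_ind(test_panel_a, control_panel, equal_var=False)

lift = test_panel_a.mean() / control_panel.mean() - 1.0
print(f"Observed lift: {lift:+.1%}, p-value: {p_value:.3f}")
# A p-value above 0.05 is the "ho-hum" result: no statistically significant increase.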

It turned out that clients kind of liked it back in the bad old days when market research firms held a wet finger up to the breeze and from that divined that their client was a creative genius whose new ad would revolutionize the toilet paper business forever. (New ads and bigger budgets mostly work only if your ad has some actual message of value to convey to consumers: e.g., "Crest now comes with Unobtanium, which the American Dental Association endorses for fighting Tooth Scuzz.")

These parallels between the consumer packaged goods industry in the 1980s and the educational reform industry in the 2010s are not really coincidental. Everybody says they want better tests, but what they really want is more congenial results. So, when they get better tests, they aren’t as happy as they thought they'd be.


