Social Science For Fun And Profit

By Steve Sailer

12/09/2012

The field of social psychology was embarrassed recently when it was revealed that the respected, highly productive social psychologist Diederik Stapel had simply been making up data for his popular papers.

But it turns out you can run real experiments and still cheat.

Via Andrew Gelman:

The Data Vigilante

Students aren’t the only ones cheating — some professors are, too. Uri Simonsohn is out to bust them.

By CHRISTOPHER SHEA

… Simonsohn initially targeted not flagrant dishonesty, but loose methodology. In a paper called “False-Positive Psychology,” published in the prestigious journal Psychological Science, he and two colleagues — Leif Nelson, a professor at the University of California at Berkeley, and Wharton’s Joseph Simmons — showed that psychologists could all but guarantee an interesting research finding if they were creative enough with their statistics and procedures.

The three social psychologists set up a test experiment, then played by current academic methodologies and widely permissible statistical rules. By going on what amounted to a fishing expedition (that is, by recording many, many variables but reporting only the results that came out to their liking); by failing to establish in advance the number of human subjects in an experiment; and by analyzing the data as they went, so they could end the experiment when the results suited them, they produced a howler of a result, a truly absurd finding. They then ran a series of computer simulations using other experimental data to show that these methods could increase the odds of a false-positive result — a statistical fluke, basically — to nearly two-thirds.
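Here’s a quick back-of-the-envelope simulation of my own (not the paper’s actual simulation, and with made-up group sizes and peeking schedule) of just one of those tricks: the peek-as-you-go stopping rule, where you run a t-test after every few new subjects and stop the moment p dips below .05. Even with no real effect at all, the “significance” rate comes out well above the advertised 5%.

```python
# A sketch (mine, not Simmons/Nelson/Simonsohn's) of optional stopping:
# two groups with identical true means, re-tested after every 5 new
# subjects per group, stopping as soon as p < .05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeking_study(start_n=10, max_n=100, step=5, alpha=0.05) -> bool:
    """Return True if the experimenter ever gets to declare 'significance'."""
    a = rng.normal(size=max_n)   # no real difference between the groups
    b = rng.normal(size=max_n)
    for n in range(start_n, max_n + 1, step):
        _, p = stats.ttest_ind(a[:n], b[:n])
        if p < alpha:
            return True          # stop early and write it up
    return False

n_sims = 2000
rate = sum(peeking_study() for _ in range(n_sims)) / n_sims
print(f"'Significant' findings with no real effect: {rate:.0%} (nominal: 5%)")
```

That one trick alone roughly triples the false-positive rate; stack it on top of the record-everything-report-your-favorite fishing expedition described above and you get into the paper’s nearly-two-thirds territory.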

One thing that’s interesting is how seldom these kinds of data-mined false positives are published regarding The Gap, despite the huge incentives for somebody to come up with something reassuring about The Gap.

It’s easy to come up with Jonah Lehrer-ready false positives if you don’t care what your results are. Say you are having psych majors fill in Big Five personality questionnaires in four rooms: one painted blue, one yellow, one light green, and one off-white. That gives you five personality traits times four rooms = 20 combos. It would hardly be surprising if one combination of room color and personality trait diverged enough to be statistically significant at the 95% level. (Especially if you can stop collecting data whenever you feel like it.) People in yellow rooms are more neurotic! Or maybe they are less neurotic. Or maybe off-white rooms make people more conscientious. Or less. It doesn’t really matter. Jonah Lehrer would have blogged your paper whatever result you came up with.
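To put a number on it: with 20 independent shots at p < .05 and no real effect anywhere, the chance of at least one fluke is about 1 − 0.95^20 ≈ 64%. Here’s a quick simulation sketch of the rooms-and-traits setup above; the sample size per room and the choice of a one-sample t-test per cell are my own illustrative assumptions, not anyone’s actual study design.

```python
# Back-of-the-envelope sketch of the rooms-and-traits example above.
# Assumptions are mine: every score is pure noise (no room effect exists),
# 25 subjects per room, and each of the 4 rooms x 5 traits = 20 cells gets
# its own two-sided t-test against the true mean, at the usual alpha = .05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_ROOMS, N_TRAITS, N_PER_ROOM, ALPHA = 4, 5, 25, 0.05

def one_study() -> bool:
    """One fake study: True if any of the 20 cells comes out 'significant'."""
    for _ in range(N_ROOMS):
        scores = rng.normal(size=(N_PER_ROOM, N_TRAITS))       # null is true everywhere
        pvals = stats.ttest_1samp(scores, popmean=0.0).pvalue  # one p-value per trait
        if (pvals < ALPHA).any():
            return True
    return False

n_sims = 2000
hit_rate = sum(one_study() for _ in range(n_sims)) / n_sims
print(f"Studies with at least one fluke: {hit_rate:.0%}")
print(f"Analytic ballpark, 1 - 0.95**20: {1 - 0.95**20:.0%}")
```

Run that and roughly two studies out of three hand you a publishable-looking room-color “effect” out of pure noise, before you even add the stop-whenever-you-like trick.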

On the other hand, if your goal is to close The Gap, it’s harder to stumble into a false positive by random luck because you know what you want ahead of time. You want to show you can close The Gap.

Thus, mostly we read about uncontrolled studies of Gap Closing: The Michelle Obama International Preparatory Academy of Entrepreneurial Opportunity Charter School, where black and Hispanic students are taught by Ivy League grads working 75 hours per week, had test scores almost equal to the state average! (Don’t mention that 45% of the public school students in the state are NAMs. And don’t mention what white and Asian students would have done with those Ivy Leaguer teachers. There’s no control group in this pseudo-experiment.)

The most popular social science research cited on The Gap — stereotype threat — seems to be a combination of two things: the file drawer effect (there isn’t a big market for articles saying you couldn’t replicate a beloved finding) and the fact that it’s not really that hard to get black students not to work hard on meaningless tests.
