By Steve Sailer
11/26/2020
From The New York Times science section:
Meet GPT-3. It Has Learned to Code (and Blog and Argue).
The latest natural-language system generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs.
By Cade Metz
Nov. 24, 2020… One of his experiments involved a pop psychologist, Scott Barry Kaufman. The system took in Mr. Kaufman’s name and a topic for discussion: creativity. Then, when asked “How do we become more creative?” GPT-3 responded instantly:
I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges. And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new.
On the other hand, nobody yet seems to have figured out how to program into GPT-3 an Orwellian Crimestop module:
OpenAI plans to sell access to GPT-3 via the internet, turning it into a widely used commercial product, and this year it made the system available to a limited number of beta testers through their web browsers. Not long after, Jerome Pesenti, who leads the Facebook A.I. lab, called GPT-3 “unsafe,” pointing to sexist, racist and otherwise toxic language the system generated when asked to discuss women, Black people, Jews and the Holocaust.
With systems like GPT-3, the problem is endemic. Everyday language is inherently biased and often hateful, particularly on the internet. Because GPT-3 learns from such language, it, too, can show bias and hate. And because it learns from internet text that associates atheism with the words “cool” and “correct” and that pairs Islam with “terrorism,” GPT-3 does the same thing.
This may be one reason that OpenAI has shared GPT-3 with only a small number of testers. The lab has built filters that warn that toxic language might be coming, but they are merely Band-Aids placed over a problem that no one quite knows how to solve.

“They are doing the right thing by not just publicly releasing GPT-3,” said Allison Koenecke, a Stanford researcher who explores unwanted bias in A.I. systems. “A lot is still up in the air.”
Why has programming “protective stupidity” into AI been such a daunting task so far? It seems like the rules are pretty easy: e.g., here is a list of Good People who cannot be criticized. It’s not as if only smart people can be stupid.
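For what it's worth, the kind of rule described above would be trivial to write down. Here is a minimal sketch in Python of a "list of Good People who cannot be criticized" filter; the topic list, the function names, and the generate() stub are all hypothetical illustrations, not OpenAI's actual filtering code:

```python
# Hypothetical sketch of a crude "Crimestop" filter: a hardcoded list of
# protected topics, checked against the model's draft output before it
# is shown to the user. Everything here is illustrative, not OpenAI's.
PROTECTED_TOPICS = {"women", "black people", "jews", "the holocaust"}

def generate(prompt: str) -> str:
    """Stand-in for a call to the language model; returns a canned reply."""
    return ("I think creative expression is a natural byproduct "
            "of growing up in a diverse world.")

def violates_policy(text: str) -> bool:
    """Crude keyword match: flag any mention of a protected topic."""
    lowered = text.lower()
    return any(topic in lowered for topic in PROTECTED_TOPICS)

def respond(prompt: str) -> str:
    draft = generate(prompt)
    if violates_policy(draft):
        return "[response withheld]"  # the protective-stupidity branch
    return draft

print(respond("How do we become more creative?"))
```

Of course, a keyword check this blunt suppresses any mention of a topic, hostile or not, which may be why the Times article describes such filters as mere "Band-Aids placed over a problem that no one quite knows how to solve."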