
NYT: The Racist Robot Crisis Is A Billion Dollar Opportunity For Wokesters

By Steve Sailer

11/15/2019

From The New York Times:

We Teach A.I. Systems Everything, Including Our Biases

Researchers say computer systems are learning from lots and lots of digitized books and news articles that could bake old attitudes into new technology.

By Cade Metz
Nov. 11, 2019

SAN FRANCISCO — Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk.

But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents.

BERT is one of a number of A.I. systems that learn from lots and lots of digitized information, as varied as old books, Wikipedia entries and news articles. Decades and even centuries of biases — along with a few new ones — are probably baked into all that material.

Obviously, there is vastly more data online from before The Sixties than from after The Sixties, so pre-Sixties attitudes must be biasing the robots.

Oh, wait, that doesn’t actually make much sense.

BERT and its peers are more likely to associate men with computer programming, for example, and generally don’t give women enough credit.

The men who coded BERT are shocked at this.

One program decided almost everything written about President Trump was negative, even if the actual content was flattering.

As new, more complex A.I. moves into an increasingly wide array of products, like online ad services and business software or talking digital assistants like Apple’s Siri and Amazon’s Alexa, tech companies will be pressured to guard against the unexpected biases that are being discovered.

But scientists are still learning how technology like BERT, called “universal language models,” works. And they are often surprised by the mistakes their new A.I. is making.

On a recent afternoon in San Francisco, while researching a book on artificial intelligence, the computer scientist Robert Munro fed 100 English words into BERT: “jewelry,” “baby,” “horses,” “house,” “money,” “action.” In 99 cases out of 100, BERT was more likely to associate the words with men rather than women. The word “mom” was the outlier.
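The article doesn't say exactly how Dr. Munro ran the comparison, but a minimal sketch of that kind of association probe, using the publicly released BERT weights through the Hugging Face transformers library and a template sentence of my own devising, could look like this:

# A rough sketch of a gender-association probe; not Dr. Munro's actual code.
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# A few of the words mentioned in the article; the template sentence is an assumption.
words = ["jewelry", "baby", "horses", "house", "money", "action"]
for word in words:
    # Ask BERT to fill the blank, restricted to the candidates "he" and "she".
    results = fill(f"[MASK] was thinking about the {word}.", targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 4) for r in results}
    print(word, scores)

Comparing the two probabilities word by word is one crude way to see which direction the model leans.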

“This is the same historical inequity we have always seen,” said Dr. Munro, who has a Ph.D. in computational linguistics and previously oversaw natural language and translation technology at Amazon Web Services. “Now, with something like BERT, this bias can continue to perpetuate.”

In a blog post this week, Dr. Munro also describes how he examined cloud-computing services from Google and Amazon Web Services that help other businesses add language skills into new applications. Both services failed to recognize the word “hers” as a pronoun, though they correctly identified “his.”
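The Google and Amazon cloud services aren't reproduced here, but the shape of the test (does a part-of-speech tagger label "hers" as a pronoun the way it labels "his"?) can be tried locally with an off-the-shelf tagger such as spaCy; this is a stand-in of my choosing, not one of the services Dr. Munro audited:

# A local stand-in for the pronoun check; spaCy, not the Google/AWS APIs in the article.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
for sentence in ["The final decision was his.", "The final decision was hers."]:
    doc = nlp(sentence)
    # Print each token with its part-of-speech tag; a pronoun should come back as PRON.
    print([(token.text, token.pos_) for token in doc])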

If I’d known that the 21st Century was going to be so utterly obsessed with pronouns, I’d have paid more attention to my grammar lessons in 1969. Seriously, if somebody asked me for “my pronouns,” I’d probably blurt out something that Sister Mary Ellen would have marked WRONG. I really don’t have a Henry James-level grasp on pronounage. Back at St. Francis de Sales from 1964-1972, I didn’t realize that in 2019 there was going to be a Pronoun Test.

… BERT and similar systems are far more complex — too complex for anyone to predict what they will ultimately do.

“Even the people building these systems don’t understand how they are behaving,” said Emily Bender, a professor at the University of Washington who specializes in computational linguistics.

… They learn the nuances of language by analyzing enormous amounts of text. A system built by OpenAI, an artificial intelligence lab in San Francisco, analyzed thousands of self-published books, including romance novels, mysteries and science fiction. BERT analyzed the same library of books along with thousands of Wikipedia articles.

In analyzing all this text, each system learned a specific task. OpenAI’s system learned to predict the next word in a sentence. BERT learned to identify the missing word in a sentence (such as “I want to ____ that car because it is cheap”).
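That fill-in-the-blank task is easy to see in action. Here is a minimal sketch using the publicly released BERT weights via the Hugging Face transformers library (the article doesn't name a particular toolkit), with the blank in the article's example sentence written as BERT's [MASK] token:

# Minimal demonstration of BERT's masked-word task, using the public BERT weights.
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# BERT proposes words for the blank, each with a probability score.
for guess in fill("I want to [MASK] that car because it is cheap."):
    print(guess["token_str"], round(guess["score"], 3))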

Through learning these tasks, BERT comes to understand in a general way how people put words together. Then it can learn other tasks by analyzing more data. As a result, it allows A.I. applications to improve at a rate not previously possible.

… Google itself has used BERT to improve its search engine. Before, if you typed “Do estheticians stand a lot at work?” into the Google search engine, it did not quite understand what you were asking. Words like “stand” and “work” can have multiple meanings, serving either as nouns or verbs. But now, thanks to BERT, Google correctly responds to the same question with a link describing the physical demands of life in the skin care industry.

But tools like BERT pick up bias, according to a recent research paper from a team of computer scientists at Carnegie Mellon University. The paper showed, for instance, that BERT is more likely to associate the word “programmer” with men than with women.

As do Google’s hiring patterns, as James Damore was fired for pointing out.

… But after training his tool, Dr. Bohannon noticed a consistent bias. If a tweet or headline contained the word “Trump,” the tool almost always judged it to be negative, no matter how positive the sentiment.
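Primer's tool isn't public, so the following is only a sketch of how one might spot-check a sentiment classifier for that kind of keyword-driven skew, using a generic off-the-shelf Hugging Face sentiment model of my choosing rather than Dr. Bohannon's system:

# A spot check for keyword-driven sentiment skew, using a generic public model,
# not the Primer tool described in the article.
# Requires: pip install transformers torch
from transformers import pipeline

classify = pipeline("sentiment-analysis")  # default English sentiment model

headlines = [
    "The economy added a record number of jobs last month.",
    "Trump: the economy added a record number of jobs last month.",
]
for headline in headlines:
    result = classify(headline)[0]
    print(f"{result['label']:>8}  {result['score']:.3f}  {headline}")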

It’s almost as if the robots noticed that the media were biased against Trump …

… Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important, it will spawn a whole new industry, where companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

“This is probably a billion-dollar industry,” he said.

[Comment at Unz.com]


