Algorithmic Intelligence Has Gotten So Smart, It's Easy To Forget It's Artificial | Connecticut Public Radio


Jun 28, 2019

Algorithms were around for a very long time before the public paid them any notice. The word itself is derived from the name of a 9th-century Persian mathematician, and the notion is simple enough: an algorithm is just any step-by-step procedure for accomplishing some task, from making the morning coffee to performing cardiac surgery.

Computers use algorithms for pretty much everything they do — adding up a column of figures, resizing a window, saving a file to disk. But all those things usually just happen the way they're supposed to. We don't have to think about what's going on under the hood.

But algorithms got harder to ignore when they started taking over tasks that used to require human judgment — deciding which criminal defendants get bail, winnowing job applications, prioritizing stories in a news feed. All at once the media are full of disquieting headlines like "How to Manage our Algorithmic Overlords" and "Is the Algorithmification of the Human Experience a Good Thing?"

Ordinary muggles may not know exactly how an algorithm works its magic, and a lot of people use the word just as a tech-inflected abracadabra. But we're reminded every day how unreliable these algorithms can be. Ads for vitamin supplements show up in our mail feed, while wedding invitations are buried in the junk file. An app sends us off a crowded highway and lands us bumper-to-bumper in local streets.

OK — these are mostly just inconveniences. But they shake our confidence in the algorithms that are doing more important work. How can I trust Facebook's algorithms to get hate speech right when they've got other algorithms telling advertisers that my interests include The Celebrity Apprentice, beauty pageants and the World Wrestling Entertainment Hall of Fame?

It's hard to resist anthropomorphizing these algorithms — we endow them with insight and intellect, or with human frailties like bad taste and bias. Disney actually personified the algorithm literally in their 2018 animated movie Ralph Breaks the Internet, in the form of a character who has the title of Head Algorithm at a video-sharing site. She's an imperious fashionista who recalls Meryl Streep in The Devil Wears Prada, as she sits at a desk swiping through cat videos and saying "no," "no," "yes."

Tech companies tend to foster that anthropomorphic illusion when they tout their algorithms as artificial intelligence or just AI. To most people, that term evokes the efforts to create self-aware beings capable of reasoning and explaining themselves, like Commander Data of Star Trek or HAL in 2001: A Space Odyssey.

That was the aim of what computer scientists call "good old-fashioned" AI. But AI now connotes what's called "second-wave AI" or "narrow AI." That's a very different project, focused on machine learning. The idea is to build systems that can mimic human behavior without having to understand it. You train an algorithm in something like the way psychologists have trained pigeons to distinguish pictures of Charlie Brown from pictures of Lucy. You give it a pile of data — posts that Facebook users have engaged with, comments that human reviewers have classified as toxic or benign, messages tagged as spam or not spam, and so on. The algorithm chews over thousands or millions of factors until it can figure out for itself how to tell the categories apart or predict which posts or videos somebody will click on. At that point you can set it loose in the world.
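The spam-tagging case can be sketched in a few lines of Python. This is a toy naive Bayes classifier, nothing like the scale of a real production filter, and the training messages and labels below are invented for illustration — but the shape is the one described above: count word statistics in hand-labeled examples, then label new messages by which statistics fit best.

```python
from collections import Counter
import math

# Toy training data: messages a human has tagged as spam or not spam
# ("ham"). These examples are invented for illustration.
training = [
    ("win free money now", "spam"),
    ("free prize click now", "spam"),
    ("lunch meeting moved to noon", "ham"),
    ("see you at the party", "ham"),
]

# "Training" here just means counting how often each word
# appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in training:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    """Pick the label whose word statistics best fit the message."""
    vocab = len({w for c in word_counts.values() for w in c})
    scores = {}
    for label in word_counts:
        total = sum(word_counts[label].values())
        # Log-probabilities with add-one smoothing, so a word the
        # model has never seen doesn't zero out the whole score.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + vocab))
        scores[label] = score
    return max(scores, key=scores.get)
```

Note that nothing in the code knows what "spam" means; the label is just a bucket whose word frequencies differ from the other bucket's.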

These algorithms can be quite adept at specific tasks. Take a very simple system I built with two colleagues some years ago that could sort out texts according to their genre. We trained an algorithm on a set of texts that were tagged as news articles, editorials, fiction, and so on, and it masticated their words and punctuation until it was pretty good at telling them apart — for instance, it figured out for itself that when a text contained an exclamation point or a question mark, it was more likely to be an editorial than a news story. But it didn't understand the texts it was processing or have any concept of the difference between an opinion and a news story, any more than those pigeons know who Charlie Brown and Lucy are.
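A drastically simplified toy version of such a genre sorter makes the point concrete. The sketch below "learns" only one of the cues mentioned above — how often each genre uses exclamation points and question marks — and the training sentences and genre labels are invented for illustration, not the actual corpus from that project.

```python
PUNCT = "!?"

def punct_rate(text):
    """Fraction of characters that are '!' or '?'."""
    return sum(text.count(p) for p in PUNCT) / max(len(text), 1)

def train(examples):
    """Average the punctuation rate for each labeled genre."""
    totals, counts = {}, {}
    for text, genre in examples:
        totals[genre] = totals.get(genre, 0.0) + punct_rate(text)
        counts[genre] = counts.get(genre, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def classify(model, text):
    """Assign the genre whose average punctuation rate is closest."""
    rate = punct_rate(text)
    return min(model, key=lambda g: abs(model[g] - rate))

# Invented training texts, hand-tagged by genre.
model = train([
    ("The committee voted 5-2 to approve the budget.", "news"),
    ("Officials said the plan takes effect in June.", "news"),
    ("How long must we tolerate this? Enough is enough!", "editorial"),
    ("Is this really the best we can do? Hardly!", "editorial"),
])
```

The model is just a table of numbers. It can sort an impassioned editorial from a dry wire story, but it has no notion of what an opinion is — which is exactly the pigeon point.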

The University of Toronto computer scientist Brian Cantwell Smith makes this point very crisply in a forthcoming book called The Promise of Artificial Intelligence, arguing that the systems have no concept of spam or porn or extremism or even of a game — rather, those are just elements of the narratives we tell about them.


These algorithms are really triumphs of intelligent artifice: ingenious systems that can mindlessly simulate human judgment. Sometimes they do that all too well, when they reproduce the errors in judgment they were trained on. If you train a credit rating algorithm on historical lending data that's infected with racial or gender bias, the algorithm is going to inherit that bias, and it won't be easy to tell. But they can also fail in alien ways that betray an unhuman weirdness. You think of the porn filters that block flesh-colored pictures of pigs and puddings, or those notorious image recognition algorithms that were identifying black faces as gorillas.

So it's natural to be wary of our new algorithmic overlords. They've gotten so good at faking intelligent behavior that it's easy to forget that there's really nobody home.

Copyright 2019 Fresh Air. To see more, visit Fresh Air.

DAVID BIANCULLI, HOST:

This is FRESH AIR. Algorithms, that's the headline word for all the decision-making we've handed over to computers, from assigning credit scores to recommending YouTube videos to diagnosing cancer. The more we rely on them, the more of a hash they seem to make of things. Our linguist Geoff Nunberg has these thoughts on a word that has come to stand in for the power technology wields in our lives.

GEOFF NUNBERG, BYLINE: Algorithms were around for a very long time before the public paid them any notice. The word itself is derived from the name of a 9th-century Persian mathematician. And the notion is simple enough. An algorithm's just any step-by-step procedure for accomplishing some task, from making the morning coffee to performing cardiac surgery. Computers use algorithms for pretty much everything they do, adding up a column of figures, resizing a window, saving a file to a disk. But all those things usually just happen the way they're supposed to. We don't have to think about what's going on under the hood.

But algorithms got harder to ignore when they started taking over tasks that used to require human judgment - deciding which criminal defendants get bail, winnowing job applications, prioritizing stories in a news feed. All at once, the media are full of disquieting headlines like, "How To Manage Our Algorithmic Overlords" and "The Algorithmification Of The Human Experience." Ordinary muggles may not know exactly how an algorithm works its magic, and a lot of people use the word just as a tech-inflected abracadabra.

But we're reminded every day how unreliable these algorithms can be. Ads for vitamin supplements show up in our mail feed, while wedding invitations are buried in the junk file. An app sends us off a crowded highway and lands us bumper to bumper in local streets. OK, these are mostly just inconveniences. But they shake our confidence in the algorithms that are doing more important work. How can I trust Facebook's algorithms to get hate speech right when they've got other algorithms telling advertisers that my interests include "The Celebrity Apprentice," beauty pageants and the World Wrestling Entertainment Hall of Fame?

It's hard to resist anthropomorphizing these algorithms. We endow them with insight and intellect or with human frailties, like bad taste and bias. Disney actually personified the algorithm literally in their 2018 animated movie, "Ralph Breaks The Internet," in the form of a character who has the title of head algorithm at a video-sharing site. She's an imperious fashionista who recalls Meryl Streep in "The Devil Wears Prada" as she sits at a desk swiping through cat videos and saying, no, no, yes.

Tech companies tend to foster that anthropomorphic illusion when they tout their algorithms as artificial intelligence, or just AI. To most people, that term evokes the efforts to create self-aware beings capable of reasoning and explaining themselves, like Commander Data of "Star Trek" or HAL in "2001." That was the aim of what computer scientists call good old-fashioned AI. But AI now connotes what's called second-wave AI or narrow AI. That's a very different project focused on machine learning.

The idea is to build systems that can mimic human behavior without having to understand it. You train an algorithm in something like the way psychologists have trained pigeons to distinguish pictures of Charlie Brown from pictures of Lucy. You give it a pile of data, posts that Facebook users have engaged with, comments that human reviewers have classified as toxic or benign, messages tagged as spam or not spam and so on. The algorithm chews over thousands or millions of factors until it can figure out for itself how to tell the categories apart or predict which posts or videos somebody will click on. At that point, you can set it loose in the world.

These algorithms can be quite adept at specific tasks. Take a very simple system I built with two colleagues some years ago that could sort out texts according to their genre. We trained an algorithm on a set of texts that were tagged as news articles, editorials, fiction and so on. And it masticated their words and punctuation until it was pretty good at telling them apart. For instance, it figured out for itself that when a text contained an exclamation point or question mark, it was more likely to be an editorial than a news story. But it didn't understand the text it was processing or have any concept of the difference between an opinion and a news story - no more than those pigeons know who Charlie Brown and Lucy are.

The University of Toronto computer scientist Brian Cantwell Smith makes this point very crisply in a forthcoming book called "The Promise Of Artificial Intelligence." However impressive they may be, he says, all existing AI systems do not know what they're talking about. By that he means that the systems have no concept of spam or porn or extremism or even of a game. Those are just elements of the narratives we tell about them.

The algorithms are really triumphs of intelligent artifice, ingenious systems that can mindlessly simulate human judgment. Sometimes they do that all too well when they reproduce the errors in judgment they were trained on. If you train a credit rating algorithm on historical lending data that's infected with racial or gender bias, the algorithm's going to inherit that bias, and it won't be easy to tell. But they can also fail in alien ways that betray an unhuman weirdness. You think of the porn filters that block flesh-colored pictures of pigs and puddings or those notorious image-recognition algorithms that were identifying black faces as gorillas.

So it's natural to be wary of our new algorithmic overlords. They've gotten so good at faking intelligent behavior that it's easy to forget that there's really nobody home.

BIANCULLI: Geoff Nunberg is a linguist at the University of California Berkeley School of Information. Coming up, I review the new Showtime miniseries "The Loudest Voice," about TV executive Roger Ailes and the birth and rise of the Fox News Channel. This is FRESH AIR.

(SOUNDBITE OF FRED KATZ'S "OLD PAINT")

Transcript provided by NPR, Copyright NPR.