Artificial intelligence is getting smarter — and seemingly less artificial

Meet BINA48, a head-and-shoulders animatron with 32 facial-control motors under a rubber-like skin, a microphone in its ear, a speaker, and cameras in its lifelike eyeballs. It also has a zipper in its neck so engineers can get at the works in its head. In what its creators call a “mindfile,” BINA48 holds dozens of hours of reminiscences, as well as the speech mannerisms, of an actual woman who agreed to transfer her memories and personality to a software program.

When asked relevant questions, the robot can tell stories from the woman’s past. It can recognize people she’s met, engage in elementary conversations with reasonable skill, and even know when it’s appropriate to make a joke or two. The device is called BINA48 because its human “sister” is named Bina and because the human brain can perform an estimated 48 exaflops (48 × 10¹⁸ calculations per second), a speed not yet reached by any computer.

BINA48 is largely a research and demonstration project, but its cousins are already at work, hinting at a future populated by artificially intelligent robots, on-screen avatars and appliances that do everything from keeping us company to becoming our personal health coaches. They are also raising questions about how we’ll live with, and perhaps be changed by, software and machines all too seductively human.

Among them is Kaspar, one of several robots bringing children with autism out of their shroud of isolation. The child-size robot — with its limited motion and facial expressions — can be operated remotely by a therapist to model social behavior and conversation. After playing with the robot for as little as 10 minutes, many children with autism begin to show social behavior they had never achieved with a human therapist.

There are several Kaspar-like robots showing promise in autism therapy. In a Notre Dame study, 17 of 19 children with autism who interacted with a robot improved their social skills, using their new abilities with human therapists and at home.

Robotic therapy has a big future

Why robots? Children with autism are easily overstimulated; robots, with their limited facial and vocal expressions, don’t overwhelm these children. A robot also never loses patience, will repeat the same words or gestures endlessly, and is never ruffled by a child’s sudden or seemingly inappropriate behavior. Building on this promising foundation, researchers envision programmable, artificially intelligent, autonomous robots that can interact with children and yield therapeutic results.

But artificial intelligence doesn’t reside only in robots. Computers and handheld devices such as smartphones already serve as companions and minders. Apple’s iPhones have Siri (and cars soon will, too), which carries out simple tasks and keeps up its end of very limited conversations. Microsoft has its Siri-like Cortana for Windows products.

Before Siri and Cortana, there was Laura. Laura was the animated on-screen presence in FitTrack, an artificially intelligent software program that debuted almost a decade ago to help college students stick to an exercise program. Laura was the coach. She could make appropriate greeting and parting comments, review individualized fitness plans and goals, and give encouragement when needed. Laura looked pleased when goals were met; if a student failed to meet exercise goals, Laura would ask why and generate an appropriately concerned and sympathetic facial expression when the student said, “I had finals this week” or “I strained a muscle.”
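In rough outline, the logic behind a coach like Laura can be quite simple. The Python sketch below shows one plausible rule-based approach, keyed to whether the student met the week’s goal and what excuse, if any, the student offers; the keywords, replies and facial expressions here are illustrative assumptions, not FitTrack’s actual code.

```python
# Hypothetical sketch of a rule-based exercise coach in the spirit of
# Laura/FitTrack. The expressions, keywords and replies are invented
# for illustration; they are not the real system's logic.

GOAL_MET_REPLY = ("pleased", "Great work! You hit your goal this week.")

# Map keywords in a student's excuse to an (expression, message) pair.
EXCUSE_RULES = {
    "finals": ("sympathetic", "Exams are stressful. Let's plan a lighter week."),
    "strained": ("concerned", "Sorry to hear that. Rest up and we'll ease back in."),
}

def coach_response(goal_met: bool, student_reply: str = "") -> tuple[str, str]:
    """Return the facial expression and message the on-screen coach shows."""
    if goal_met:
        return GOAL_MET_REPLY
    reply = student_reply.lower()
    for keyword, response in EXCUSE_RULES.items():
        if keyword in reply:
            return response
    # No recognized excuse: stay concerned and ask a follow-up question.
    return ("concerned", "What got in the way this week?")

print(coach_response(False, "I strained a muscle"))
# ('concerned', "Sorry to hear that. Rest up and we'll ease back in.")
```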

The Northeastern University researchers who designed the system found students coached by Laura kept to their exercise plans longer than students who used a text-based coach or an on-screen avatar that lacked human affect. When a companion program was created to encourage seniors to walk more, those using the program did significantly more walking than people in a control group.

Another experiment from Northeastern’s Artificial Intelligence Group employed a similar animated figure to carry out the duties of a hospital discharge nurse. Discharge nurses spend time with patients about to leave a hospital, reviewing a booklet that describes medication usage and follow-up physician appointments, and answering patients’ questions. In the study, patients clearly preferred dealing with the animated figure to dealing with a real nurse. A human nurse was often harried and impatient, but the on-screen nurse explained everything, checked patients’ understanding with short quizzes (referring them back to the booklet to correct misunderstandings), and took all the time patients needed. The patients consistently referred to the software program as “she,” the gender embodied by the on-screen image.
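The check-and-correct routine those patients responded to maps onto a short loop: ask about each instruction, and point back to the booklet when an answer misses. The Python below is a hypothetical sketch with made-up questions and page numbers, not the Northeastern system itself.

```python
# Hypothetical sketch of a discharge-review quiz loop. Questions,
# answers and page numbers are invented for illustration.

BOOKLET = [
    # (question, expected answer, booklet page)
    ("How many times a day do you take your medication?", "twice", 3),
    ("When is your follow-up appointment?", "in two weeks", 7),
]

def discharge_review(get_answer):
    """Quiz the patient; on a wrong answer, refer back to the booklet."""
    for question, expected, page in BOOKLET:
        if get_answer(question).strip().lower() == expected:
            print("That's right.")
        else:
            print(f"Let's review page {page} of the booklet together.")

# Demo with canned answers standing in for real patient input.
canned = iter(["Twice", "tomorrow"])
discharge_review(lambda question: next(canned))
```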

Elderly people are a special target for these new “conversational agents.” Another Northeastern University research team created a computer-based buddy to keep senior citizens living alone from being lonely and suffering the range of ills, such as depression, that flow from isolation. The more that older folks used the program, the less lonely they felt compared with compatriots in a control group. The AI agent was especially effective when it initiated conversations rather than waiting for a person to talk to it.

Eldercare via virtual aides

With the ranks of elders swelling faster than the number of human caregivers available to them, these virtual buddies could potentially be programmed to do more than make small talk. They might engage elders in games that strengthen memory, reasoning and cognitive abilities; encourage them to exercise; use a person’s conversational cues or responses to questions to diagnose emotional disturbances; and even provide rudimentary therapy.

Other artificial intelligences are relieving some of the burden of caring for adults with Alzheimer’s disease. A device the size of a drugstore paperback, built by researchers at the University of Washington, contains a GPS system and can learn a person’s normal movements; if it’s 4 o’clock and the person should be heading to the kitchen to take medication, it will prompt him to get a move on. Versions that can guide people to bus stops or help them find a car in a parking lot are in development.
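One plausible way to implement that routine-learning behavior is to tally where the wearer usually is at each hour and prompt when the current GPS reading deviates. The Python sketch below illustrates only the pattern; the class, data and messages are invented, and the University of Washington device is certainly more sophisticated.

```python
# Hypothetical sketch of a routine-learning reminder. It tallies
# observed locations per hour, then prompts when the person is not
# where the learned routine says they usually are.

from collections import Counter, defaultdict

class RoutineReminder:
    def __init__(self):
        # hour of day (0-23) -> counts of locations observed at that hour
        self.observations = defaultdict(Counter)

    def observe(self, hour: int, location: str) -> None:
        """Record one GPS-derived location sample."""
        self.observations[hour][location] += 1

    def check(self, hour: int, location: str):
        """Return a prompt if the person is not where they usually are."""
        if not self.observations[hour]:
            return None  # no routine learned for this hour yet
        usual, _count = self.observations[hour].most_common(1)[0]
        if location != usual:
            return f"It's {hour}:00. Time to head to the {usual}."
        return None

reminder = RoutineReminder()
for _ in range(5):                   # a week of 4 p.m. samples
    reminder.observe(16, "kitchen")  # medication time
print(reminder.check(16, "living room"))
# It's 16:00. Time to head to the kitchen.
```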

Eventually, artificially intelligent houses will issue reminders and instructions to Alzheimer’s patients through speakers in the walls, enabling them to live at home longer and depend less on scarce, expensive human caregivers. A consortium of English universities is developing a computer avatar that gauges a person’s facial expressions and vocal qualities to determine whether she is having a medical emergency. This and other AI aids could be in homes by 2020.

The reason such artificial intelligence systems are so effective is that they push what researchers call our “Darwinian buttons” — the parts of our brains that analyze other beings’ behavior to determine their intentions. Our brains are hard-wired to “anthropomorphize,” which means we assign human intentions and feelings to entities that show signs of human-like behavior.

Engineers who design AI systems often leverage this fact: Because a simple robot or an animated screen character smiles, looks sad or simply moves, we respond to that artificial creation as if it’s alive and sentient. This allows AI designers to cut a few corners when creating systems, knowing our imaginations will “fill in the blanks” if the software itself doesn’t deliver a perfect human knockoff.

The trouble is that when we anthropomorphize an object, we begin to empathize with it. On-screen animated characters that front for AI software programs are routinely described as “friendly” or “sympathetic” or “cheerful” by people using them. When that happens, thanks to our Darwinian buttons, it’s like naming the stray dog you found: you’ve bonded with it, and suddenly you can’t bear to part with it.

For example, the military once developed a 5-foot-long robot, modeled on a multi-legged stick insect, whose job was to blow up land mines. The robot would step on a mine, which would blow off one of its legs, and keep going until it had lost all its limbs. In a field test, it worked perfectly — walking through a minefield, successfully blowing up mines and losing its legs. Finally, it had one leg left, which it used to drag itself forward in search of one more mine. The researcher running the test was about to declare success when the colonel observing the test couldn’t bear it any longer. Seeing this creature — burned, wounded, crippled — ready to literally give its last leg for the team, he declared the test “inhumane” and demanded it be stopped.

The hidden cost of artificial intelligence

What researchers are learning is this: As AI shows its promise in preventing loneliness, nudging us to exercise, or triaging persons with mental or physical health crises, our technological creativity is again outstripping our intellectual and emotional ability to cope with our creations.

Matthias Scheutz, a computer scientist at Tufts University, is among a growing chorus calling for AI systems to be wired with a form of moral reasoning. Scheutz hypothesizes that sophisticated AI robots sold like toys could prod children to beg parents to buy more “friends” made by the same company.

An artificially intelligent on-screen companion to an elderly shut-in might demand certain new software or insist on dictating the elderly person’s behavior under threat of going silent. Software doesn’t care if you don’t talk to it for three days, Scheutz notes, but a lonely old person could be devastated by a sulking artificial buddy. And humans are scarcely prepared to discover that they’ve invested empathy, dependence and affection in a collection of virtual intelligences that are actually incapable of caring about us.

Peter Kahn, a psychologist at the University of Washington who studies relationships between humans and technology, expresses a parallel concern. He worries that artificial intelligence could render more and more of us “socially autistic”: Embroiled ever deeper in relationships with human-like entities of limited capacities, we could risk losing our ability to create and sustain full relationships with other people.

After all, one of the most seductive features of AI systems is their lack of humanity. They’re always pleasant, always even-tempered, always earnest. They don’t crack their knuckles or slurp their soup; they’re like pets that never dig holes in the lawn, claw the woodwork or soil the carpet. They’re always eager to do things for you. Who wouldn’t like to spend a little more time with companions like that? And that’s a source of their danger as well as their utility.

As Sherry Turkle, a pioneering philosopher of technology, muses, “These days, insecure in our relationships and anxious about intimacy, we look to technology for ways to be in relationships and protect ourselves from them at the same time.”

Artificial intelligence is, at its core, artificial.
