The most talked-about, futuristic product from Google’s developer show isn’t even finished — but it’s already stoking heated debate.
At its I/O conference on Tuesday, Google previewed Duplex, an experimental service that lets its voice-based digital assistant book appointments on its own. It was part of a slate of features, such as automated e-mail writing, that Google touted as ways its artificial intelligence technology saves people time and effort. In a demonstration on stage, the Google Assistant spoke with a hair salon receptionist, mimicking the “ums” and “hmms” of human speech. In another demo, it chatted with a restaurant employee to book a table. The audience of software coders cheered.
Outside the Google technology bubble, critics pounced. The company is placing robots in conversations with humans, without those people realising. The obvious question soon followed: should AI software that’s smart enough to trick humans be forced to disclose itself? Google executives don’t have a clear answer yet. Duplex emerged at a sensitive time for technology companies, and the feature has done little to alleviate questions about their growing power over data and automation software, and about the consequences for privacy and work.
“Horrifying,” Zeynep Tufekci, a professor and frequent tech company critic, wrote on Twitter about Duplex. “Silicon Valley is ethically lost, rudderless and has not learnt a thing.”
Robotic voices should always sound “synthetic” rather than human, wrote Stewart Brand, an author who advocates for long-term thinking and responsibility in the face of advancing technology and other trends. “Successful spoofing of any kind destroys trust.”
It was even a topic on sports radio. On Wednesday morning, the Murph & Mac show on KNBR in San Francisco played a clip of Duplex talking. Soon you could get a call and wonder if the voice on the other end is real or not, the hosts said.
As in previous years, Google unveiled a feature before it was ready. Google is still debating how to unleash it, and how human to make the technology, several employees said during the conference. That debate touches on a far bigger dilemma for Google: as the company races to build uncannily human-like intelligence, it is wary of missteps that could cause people to lose trust in its services.
Scott Huffman, an executive on Google’s Assistant team, said the response to Duplex was mixed: some people were blown away by the technical demos, while others were concerned about the implications. Huffman said he understands the concerns, but he doesn’t endorse one proposed solution to the creepy factor: giving the software an obviously robotic voice when it calls. “People will probably hang up,” he said.
In an interview on Wednesday, Huffman suggested the machine could say something like, “I’m the Google Assistant and I’m calling for a client.” More experiments are planned for this summer, he noted.
Another Google employee working on the assistant seemed to disagree. “We don’t want to pretend to be a human,” designer Ryan Germick said when discussing the digital assistant at a developer session earlier on Wednesday.
Germick did agree, however, that Google’s aim was to make the assistant human enough to keep users engaged. The unspoken goal: keep people asking questions and sharing information, giving the company more data with which to improve its answers and services.
There’s a thin line between Google’s aim of making its assistant seem human and deceiving real humans with software like Duplex. Google consciously decided against giving the assistant a real human background. When it’s asked how old it is, or where it was born, it either avoids the question or says clever things like “I was born in a meeting”.
Specific tasks
Duplex has been designed to perform a limited range of very specific tasks. Google’s AI technology isn’t smart enough to learn to do many other things quickly. If the human on the other end of the line asks about something other than hair appointments or restaurant tables, Duplex won’t have a human answer and may well end the call, making it clear that it is software. One Googler compared it to OpenTable’s restaurant reservation system, which automates the booking process online. No one worries that system will dupe humans by learning to do other tasks, the employee noted.
Another Google staffer privately questioned whether the response would have been as visceral if Google had shown Duplex handling more annoying consumer predicaments, such as a billing issue with a cable TV provider.
The debate didn’t end with realistic robo-calling. Douglas Eck is a scientist at Magenta, a Google AI project researching the use of machine learning to create music, video, images and text. He was asked about his vision of the future in front of a packed audience of developers at I/O on Wednesday.
Eck said machine learning, a powerful form of AI, will be integrated into how humans communicate with each other. He raised the idea of future “assistive writing” in Google Docs, the company’s online word processing software. This may build on Google’s upcoming Smart Compose technology, which suggests words and phrases as the user types. Teachers used to worry about whether students used Wikipedia for their homework. Now they may wonder what part of the work the students wrote themselves, Eck said.
This could be a dystopian vision, but it doesn’t have to be that way, the Google scientist concluded. He compared it to the electric guitar, a technology that helped humans express themselves in new ways and is considered a positive advance. — Reported by Mark Bergen, with assistance from Andrew Pollack, (c) 2018 Bloomberg LP