“Instead of Alexa’s voice reading the book, it’s the child’s grandma’s voice,” Rohit Prasad, senior vice president and chief scientist of Alexa’s artificial intelligence, excitedly explained during a keynote address in Las Vegas on Wednesday. (Amazon founder Jeff Bezos owns the Washington Post.)
The demo was the first glimpse of Alexa’s latest feature, which, although still in development, would allow the voice assistant to replicate a person’s voice from a short audio clip. The goal, Prasad said, is to build more trust with users by infusing artificial intelligence with the “human attributes of empathy and affect.”
The new feature could make loved ones’ “memories remain,” Prasad said. But while the prospect of hearing a deceased relative’s voice may tug at the heartstrings, it also raises a host of security and ethical concerns, experts said.
“I don’t think our world is ready for easy-to-use voice cloning technology,” Rachel Tobac, chief executive officer of San Francisco-based SocialProof Security, told The Washington Post. Such technology, she added, could be used to manipulate the public through fake audio or video clips.
“If a cybercriminal can easily and believably reproduce another person’s voice with a small sample, they can use that sample to impersonate other people,” added Tobac, a cybersecurity expert. “This bad actor can then trick others into believing they are the person they are impersonating, which can lead to fraud, data loss, account takeover and more.”
Then there is the danger that the line between the human and the mechanical will blur, said Tama Leaver, professor of internet studies at Curtin University in Australia.
“You won’t remember that you’re talking to the depths of Amazon … and its data-harvesting services if it speaks in the voice of your grandmother or grandfather or that of another lost loved one.”
“In a way, it’s like an episode of ‘Black Mirror,’” Leaver said, referring to the sci-fi series that envisions a dystopian, technology-driven future.
The new Alexa feature also raises questions about consent, Leaver added, especially for people who never imagined their voice being mimicked by a robotic personal assistant after they died.
“There’s a real slippery slope in using deceased people’s data in a way that is just plain creepy on the one hand, but on the other hand deeply unethical, because they never considered those clips being used in that way,” Leaver said.
Having recently lost his grandfather, Leaver said he understands the “temptation” to want to hear a loved one’s voice. But the possibility opens a floodgate of implications that society may not be prepared to take on, he said. For example, who owns the rights to the small snippets people leave behind on the internet?
“If my grandfather sent me 100 messages, should I have the right to put them in the system? And if I do, who owns it? Does Amazon own this recording then?” he asked. “Have I given up the rights to my grandfather’s voice?”
Prasad did not address such details during Wednesday’s address. However, he posited that the ability to mimic voices was a product of “unquestionably living in the golden era of AI, where our dreams and science fiction become reality.”
Should Amazon’s demo become a real feature, Leaver said, people might need to think about how their voices and likeness could be used when they die.
“Do I need to consider in my will that I have to say, ‘My voice and my story on social media are owned by my children, and they can choose whether or not to reanimate that in chat with me’?” Leaver wondered.
“Now that’s weird to say. But it’s probably a question we should have an answer to before Alexa starts talking like me tomorrow,” he added.