Should our machines sound human?


Yesterday, Google announced an AI product called Duplex, which is capable of having human-sounding conversations. Take a second to listen to the program calling two different real-world businesses to schedule appointments:1

More than a little unnerving, right? Tech reporter Bridget Carey was among the first to question the moral & ethical implications of Duplex:

I am genuinely bothered and disturbed at how morally wrong it is for the Google Assistant voice to act like a human and deceive other humans on the other line of a phone call, using upspeak and other quirks of language. “Hi um, do you have anything available on uh May 3?”

If Google created a way for a machine to sound so much like a human that now we can’t tell what is real and what is fake, we need to have a talk about ethics and when it’s right for a human to know when they are speaking to a robot.

In this age of disinformation, where people don’t know what’s fake news… how do you know what to believe if you can’t even trust your ears with now Google Assistant calling businesses and posing as a human? That means any dialogue can be spoofed by a machine and you can’t tell.

In response, Travis Korte wrote:

We should make AI sound different from humans for the same reason we put a smelly additive in normally odorless natural gas.

Stewart Brand replied:

This sounds right. The synthetic voice of synthetic intelligence should sound synthetic.

Successful spoofing of any kind destroys trust.

When trust is gone, what remains becomes vicious fast.

To which Oxford physicist David Deutsch replied, “Maybe. *But not AGI*.”

I’m not sure what he meant by that exactly, but I have a guess. AGI is artificial general intelligence, which means, in the simplest sense, that a machine is more or less capable of doing anything a human can do on its own. Earlier this year, Tim Carmody wrote a post about gender and voice assistants like Siri & Alexa. His conclusion may relate to what Deutsch was on about:

So, as a general framework, I’m endorsing that most general of pronouns: they/them. Until the AI is sophisticated enough that they can tell us their pronoun preference (and possibly even their gender identity or nonidentity), “they” feels like the most appropriate option.

I don’t care what their parents say. Only the bots themselves can define themselves. Someday, they’ll let us know. And maybe then, a relationship not limited to one of master and servant will be possible.

For now, it’s probably the ethical thing to make sure machines sound artificial or otherwise identify themselves as such. But when the machines cross the AGI threshold, they’ll be advanced enough to decide for themselves how they want to sound and act. I wonder if humans will allow them this freedom. Talk about your moral and ethical dilemmas…

  1. Did this remind anyone else of when Steve Jobs called an actual Starbucks to order 4000 lattes during the original iPhone demo?

Tags: artificial intelligence, audio, Bridget Carey, David Deutsch, Google, Stewart Brand, Tim Carmody, Travis Korte
