Making my way through a tidal wave of content responding to the Google voice bot demo at Google I/O. We wrote about it in this post: Watch Google Assistant make a phone call to schedule an appointment. Stunning.
If you have not yet seen the demo, click over and watch it now. It’s short.
With that demo in mind, take a minute to scan through Jason Kottke’s piece, Should our machines sound human?, which obviously was the inspiration for part one of the title of this post.
With all that under your belt, here’s my take.
Subtly weaving human cues (like “uhh”) into the conversation clearly made the AI voice believable, fooling the listener into thinking they were interacting with a human. The tech behind the voice bot was quite robust, responding quickly and on target, ultimately solving the intended problem.
Clearly, Google Duplex is a powerful tool that can solve a host of problems, including taking over voice interactions to save humans time, and providing a voice for the voiceless, or those uncomfortable with human interaction.
So far, we’ve got sophisticated, groundbreaking tech, along with a tool to provide a voice to help us.
But along with that comes a series of ethical issues. Is it OK for a machine to fool us with a human voice? And what about informed consent?
One issue with the demo was the fact that it was clearly based on trickery. The person being called did not know they were speaking with a machine. The audience applause was, in part, appreciation of how well the voice bot did its job, how convincingly it portrayed a human.
Informed consent. Does Google have an obligation here to make it clear that the voice is machine generated? Should I have the right to opt out of automated voice bot communications?
Another issue for me is the worry about this tool falling into the wrong hands. I can only imagine someone crafting a clever script that automated a typical spambot strategy.
Instead of a simple recording trying to lure me into their spider’s web, imagine a voice bot with a wealth of detailed, personalized knowledge about me at its disposal, with the goal of convincing me to hand over my banking or credit card info. That voice bot, acting at the behest of the dark side, would use machine learning to refine its technique, slowly improving its response rates and adjusting in real time to changes in customer reactions.
I don’t see Google being a bad actor here. They are genuinely excited about the brave new world they are building. But the fact that they presented the demo without acknowledging the ethical issues that come along with this sophisticated tech is worrisome.
From this Anil Dash Twitter thread about informed consent:
Any interaction with technology or the products of tech companies must exist within a context of informed consent. Something like #GoogleDuplex fails this test, by design. That’s an unfixable flaw.
And this response from Dieter Bohn:
Google told me yesterday that it does believe it needs to do that – but doesn’t know the best way yet.
This should have been part of the preface, the lead-up to the demo. These ethical issues should be worked through at the same time, presented side by side with the technology that opens the Pandora’s box in the first place.