Thanks to voice cloning, if hearing was ever believing, it isn't anymore. With recent advances in AI technology, it's now possible to use samples of your voice to create a recording of "you" saying just about anything. It's the sound of deceit.
There are some potentially beneficial uses of this novel capability—maybe audiobooks will get cheaper if bots can do the bulk of the work—but we’re here to focus on the risks. And they’re significant.
Imagine a late-night phone call from a strange number. You answer, and on the other end of the line is your college-age son, who’s studying in South America for a semester. He’s desperate, mumbling something about corrupt cops and an angry cab driver with a knife. He’s OK, but he needs money to sort it out. He reads off an email address.
What would you do? Yes, he sounds strange, but it’s a bad connection and he’s in distress. You’d act, right? Of course. You’re a parent. You’d try to help. Thing is, you’re not talking to your son. This is voice cloning at work. You’re talking to a fabricated recording of your son’s voice, made up of little bits of his voice pulled from videos on his social media feeds and assembled by AI into a worrisome plea.
Voice cloning adds seeming authenticity to deepfake videos, making them all the more convincing. That's concerning on an individual basis, as in the example above, and on a societal basis. Imagine the ramifications of a deepfake involving a world leader.
What can you do?