No one likes robocalls to begin with, but using AI-generated voices of people like President Biden makes them even worse. As such, the FCC is proposing that using voice cloning tech in robocalls be ruled fundamentally illegal, making it easier to charge the operators of these frauds.
You may ask why this is necessary if robocalls are illegal to begin with. In fact, they aren’t categorically illegal: some automated calls are necessary and even desirable, and it’s only when a call operation is found to be breaking the law in some way that it becomes the business of the authorities.
For example, regarding the recent fake Biden calls in New Hampshire telling people not to vote, the state’s Attorney General could (and did) say with confidence that the messages “appear to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters.”
Voter suppression is illegal under the law there, so when the perpetrators are tracked down (and I’m emailing the AG’s office constantly to find out if they have been, by the way), that is what they will be charged with, likely among other things. But the fact remains that a crime must be committed, or reasonably suspected to have been committed, for the authorities to step in.
If employing voice cloning tech in automated calls, as was obviously done to imitate Biden, is itself illegal, that makes charging robocallers that much easier.
“That’s why the FCC is taking steps to recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers,” said FCC Chairwoman Jessica Rosenworcel in a news release. The agency previously announced that it was looking into this back when the problem was relatively fresh.
The FCC already uses the Telephone Consumer Protection Act as the basis for charging robocallers and other telephone scammers. The TCPA prohibits “artificial” voices, but it is not clear that cloned voices fall under that category. It’s arguable, for instance, that a company could use the generated voice of its CEO for legitimate business purposes.
But the fact is that legal applications of the tech are fewer in number and less immediately important than the illegal ones. Therefore, the FCC proposes to issue a Declaratory Ruling stating that AI-powered voice cloning causes a call to fall under the “artificial” heading.
The law here is being rapidly iterated as telephone, messaging, and generative voice tech all evolve. So don’t be surprised if it isn’t entirely clear what is and isn’t illegal, or why some calls or scams, despite being obviously illegal, seem to operate with impunity. It’s a work in progress.
I’ve asked the FCC for more details on the exact actions being taken, and will update this post if I hear back.