California to Ban Deepfakes, Regulate AI


The California state legislature just closed out its current session by passing a massive raft of new bills and sending them to Governor Gavin Newsom's desk for final approval. The majority of the legislation is the usual housekeeping and budgetary business, but a couple of the bills merit further attention. One of these is a proposed ban on election "deepfake" videos and images. The other would mandate additional regulation of the artificial intelligence industry and seek to protect human jobs and careers that are currently being endangered by this emerging technology. You might view these aspirations as a positive step in the right direction given all the fuss and muss surrounding artificial intelligence these days, along with a proliferation of clearly fake campaign advertisements and videos. But are such restrictions and regulations even possible? And if so, will they put a chokehold on AI technology in the United States while leaving our adversaries free to outpace us in an open field? (Associated Press)


California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

The California Legislature, which is controlled by Democrats, is voting on hundreds of bills during its final week of the session to send to Gov. Gavin Newsom’s desk. Their deadline is Saturday.

The Democratic governor has until Sept. 30 to sign the proposals, veto them or let them become law without his signature. Newsom signaled in July he will sign a proposal to crack down on election deepfakes but has not weighed in on other legislation.

Let's start with what they plan to do about these supposed "deepfake" (not to be confused with cheap fake) videos. The state is seeking to ban "deepfakes related to elections." That seems overly broad because almost anything could be suggestive of one election issue or another these days. Social media platforms would be required to remove deepfake videos in the 120 days before Election Day and for 60 days after it, or face penalties. But who is going to police all of this? The majority of the country is on social media, and virtually everyone today has access to free AI tools that can create incredibly realistic images. It would be a seemingly impossible task to identify them all, to say nothing of working through the removal process. And what if a campaign releases a fake video of its own candidate? Would that have to be removed if they approved of it?


They also want to make it illegal to create AI-generated images and videos of child sexual abuse. That's absolutely a noble goal, but if the "child" being portrayed is a completely artificial construct, who is the "victim" in this crime? How do you correctly identify the "age" of the digital victim? California is sailing into some very murky legal waters here.

A separate pair of bills would force the tech giants to start disclosing what data they use to train their models and begin setting "safety guardrails" on the designs of AI models. Those large language models are constantly being updated with every bit of data that the developers can dredge up, including some fairly dubious sources at times. (I once had ChatGPT indicate that one of the sources for an answer was a Reddit forum.) As for installing "guardrails" to prevent the AI from getting out of control, that's a concept that most (though not all) of the major players in the nascent AI industry claim to agree with, but nobody seems quite sure how they could go about it without crashing the model. Others oppose installing any sort of guardrails, even if it's possible, because it would hamper the development of the technology. The California legislature can't magically make all of these capabilities appear out of nowhere with a swipe of the governor's pen.

We've discussed this here so many times now that I've lost count. There are serious, legitimate concerns over the explosive development of this technology, but we seem to have already tied ourselves into a digital Gordian knot. The AI genie is out of the bottle and new systems are popping up all over the place. Even if we found a way to pump the brakes on all of this in the United States, some of our adversaries are clearly far less squeamish about unleashing this technology on the rest of the world. I won't fault California for recognizing that there's a problem, but the ship may have already sailed in terms of doing anything about it.
