Congressional regulation of AI?


I suppose we should have seen this coming, but will it be too little, too late? While everyone is being dazzled by the latest generation of Artificial Intelligence, plenty of others are going into a panic over the potential dangers, or at least the negative outcomes, it might deliver. That includes some of the original developers of the current generation of AI systems. All of this activity has finally attracted the attention of Congress, where some members are following the first instinct of any bureaucrat confronted with something new: they’re thinking of ways that the government can regulate it. There is currently a bipartisan push to get some sort of regulatory legislation drawn up, and it’s being led by New York Democratic Congressman Ritchie Torres. He believes this will need to be done in stages, but as a starting point, he would like to mandate disclaimers on the output of generative AI systems in any medium, informing the public with a notice reading, ‘this output has been generated by artificial intelligence.’ I’m sure that will fix everything. (Axios)


Rep. Ritchie Torres (D-N.Y.) is introducing legislation that would require the products of generative artificial intelligence to be accompanied by a disclaimer, Axios has learned.

Why it matters: AI is improving at a record pace. Experts say disclosure will be crucial to combat fraud or maintain a healthy political discourse.

The big picture: Torres’ bill is the latest in a wave of new legislative efforts to regulate AI as Congress grapples with the emerging technology’s massive potential — both for societal advancement and harm.

In a statement regarding the bill, Torres described the technology as having the potential to be “a weapon of mass disinformation, dislocation, and destruction.” I’ll be the first to agree that it’s certainly going to destroy a lot of jobs. And when it’s used by people with ill intent, it could cause any number of problems. With that in mind, I’ll hold my nose and suggest that perhaps some initial government regulation wouldn’t be completely out of the question.

Even as a first step, however, disclaimers don’t sound particularly bulletproof. A disclaimer identifying something as a product of AI says nothing about the quality or veracity of the output. And what if people simply ignore the mandate? This generation of AI is already good enough that in many instances you can’t immediately tell the difference. I interrogate ChatGPT multiple times every week, and it frequently generates responses that, had you pasted one into a text message to me, I likely couldn’t have identified as the work of a bot.


Assuming such regulatory action is even possible, how does Congress plan to exert any leverage on the industry? The people running the companies developing all of these AI systems (with the exception of Elon Musk) have shown absolutely no interest in “slowing down” or installing too many “guardrails” on this technology. If anything, they are speeding up out of fear of being beaten to the forefront of The Next Big Thing.

And let’s stop for a moment and consider who is volunteering to enact these regulations. Are we really going to let the geriatric fossils in the Washington swamp take charge of regulating Artificial General Intelligence? AGI is generally described as having “approximately the same intelligence” as a human being. (Only vastly faster.) But I don’t think they had Biden, Fetterman, or Feinstein in mind when they were setting that bar. Some of the younger members may be a bit more tech-savvy than the rest, but a lot of these people couldn’t log into a Zoom call without an aide setting things up for them. Are these the people who will be asked to grapple with the inner workings of the new Large Language Models?

All I’m saying is that we should probably be prepared for regulatory efforts to fail, or at least to come up significantly short. And what happens next? Well, things might get worse, but how bad, precisely? This recent article from Kari Paul at The Guardian looks at some of the more common theories. Rather than talking about “killer robots” wiping out humanity, she describes “a creeping deterioration of the foundational areas of society.”


“I don’t think the worry is of AI turning evil or AI having some kind of malevolent desire,” said Jessica Newman, director of the University of California, Berkeley’s Artificial Intelligence Security Initiative.

“The danger is from something much more simple, which is that people may program AI to do harmful things, or we end up causing harm by integrating inherently inaccurate AI systems into more and more domains of society.”

It’s a valid concern. The big tech companies are racing to jam AI into everything they can think of, including search engines and social media content generators. It’s not too hard to see how bad actors could cause serious destabilization, particularly in the political world. The FBI was able to jigger the last presidential election with little more than deception, brute force, and intimidation. Just imagine what ChatGPT could do. And if the “disclaimer” plan doesn’t work out, we probably won’t even have any idea who is doing it.
