There appears to be a growing consensus around the country (and much of the world) that some very bad things could be on the horizon if Artificial Intelligence gets significantly “smarter,” and that it could happen far sooner than industry leaders previously believed. We recently learned that the “Godfather of AI” quit his job at Google so he could warn the world about what may be coming. So is this something the government should be regulating? As Beege discussed yesterday, the White House has decided to send Kamala Harris to discuss the issue with industry leaders. Beege found the idea hilarious, and in some ways it certainly is. But at the same time, this issue is potentially far too dangerous to take lightly, and it’s time for the Biden administration to recognize the Veep’s shortcomings and get someone with actual expertise involved in this discussion. (Associated Press)
Vice President Kamala Harris will meet on Thursday with the CEOs of four major companies developing artificial intelligence as the Biden administration rolls out a set of initiatives meant to ensure the rapidly evolving technology improves lives without putting people’s rights and safety at risk.
The Democratic administration plans to announce an investment of $140 million to establish seven new AI research institutes, administration officials told reporters in previewing the effort.
Putting aside the idea that our cackling Vice President is the person with the intellectual firepower to address these questions, the entire premise of the mission she’s being given today seems to be coming from precisely the wrong direction. The White House is already preparing to announce the creation of seven new AI research institutes funded by $140 million in taxpayer dollars. What are these institutes supposed to be doing? They’re figuring out “how federal agencies can use AI tools.”
Yes, there was also a mention of discussing risk reduction, but it doesn’t seem as though the industry itself is even capable of addressing those questions. Obviously, a group of bureaucrats won’t be up to the task. But shouldn’t it be rather obvious that rushing to incorporate more and more AI into our governmental computer systems might be just about the stupidest thing imaginable?
As I was composing this article today, an advertisement came on CNN for a company called Appypie. Their service claims to give anyone the ability to create AI-driven apps on any device “without needing to hire a programmer.” In other words, they’re using AI to let people create more AI that will do almost anything the customer can dream up. And there are people out there capable of dreaming up some very bad things. I’m once again reminded of a line from Jeff Goldblum’s character in Jurassic Park, who warned that the scientists were so preoccupied with whether they could do something that they never stopped to ask whether they should.
Speaking strictly as a layman, I have long questioned whether true Artificial General Intelligence is even possible to create, to say nothing of Artificial Super Intelligence. We still don’t really understand how human thought and consciousness operate beyond mapping where our brains store particular types of information. Nobody can tell us how our brains encode that data or where the first spark came from that allows us to process it and generate original thoughts. So how could we teach a machine to duplicate that process? But maybe that’s just the underlying reality. Perhaps consciousness simply happens on its own when you assemble enough data in one system. The experts increasingly seem convinced that it could happen. And the possible negative outcomes if it does are staggering.
Rather than sending in the Vice President or the Office of Management and Budget, perhaps we should be getting the Pentagon involved. Does anyone have a plan detailing how we would respond if the AI genie gets completely out of the bottle and begins running amok? If not, we should probably start developing one today. Some very smart people are very worried that this is no longer a science fiction scenario and that it could already be happening in the background even as we debate it.