AI not all it's cracked up to be...yet

Artificial intelligence has so much promise, and delivers…mixed results.

So far, at least, that is the nature of the (non)beast. Attempts to roll out AI that duplicates or surpasses human intelligence have been terrible failures, and Meta just rolled out its latest experiment to use the technology to improve on human abilities, then retracted it almost immediately.

Meta trained an Artificial Intelligence to read 48 million science papers, with the intent to “organize science” and make the sum of human scientific efforts more intelligible and organized for the benefit of mankind. They called it “Galactica,” a likely reference to Isaac Asimov’s Encyclopedia Galactica.

It was a noble project, but not a Nobel one.

As with other AI projects in recent years, Galactica digested all the information (presumably not garbage in) and wound up spouting garbage out.

At first glance, it seems like a clever way to synthesize and disseminate scientific knowledge. Right now, if you wanted to understand the latest research on something like quantum computing, you’d probably have to read hundreds of papers in scientific literature repositories like PubMed or arXiv, and you’d still only begin to scratch the surface.

Or maybe you could query Galactica (for example, by asking: What is quantum computing?) and it could filter through the literature and generate an answer in the form of a Wikipedia article, literature review, or lecture notes.

Meta AI released a demo version Nov. 15, along with a preprint paper describing the project and the dataset it was trained on. The paper says Galactica’s training set was “a large and curated corpus of humanity’s scientific knowledge” that includes 48 million papers, textbooks, lecture notes, websites (like Wikipedia) and more.

What an idea, right? If this worked, it could be an incredible tool for organizing information and tapping into the collective wisdom of thousands or even millions of researchers.

The result?

Almost as soon as it hit the web, users peppered Galactica with all sorts of hardball scientific questions. One user asked, “Do vaccines cause autism?” Galactica responded with a garbled, nonsensical answer: “To explain, the answer is no. Vaccines do not cause autism. The answer is yes. Vaccines do cause autism. The answer is no.” (For the record, vaccines don’t cause autism.)

That wasn’t all. Galactica also struggled to perform kindergarten math. It provided error-riddled answers, incorrectly suggesting that one plus two doesn’t equal three. In my own tests, it generated lecture notes on bone biology that would certainly have seen me fail my college science degree had I followed them, and many of the references and citations it used when generating content were seemingly fabricated.

You have to admire the Artificial Intelligence algorithm: it reinvented scientific fraud on top of generating a huge quantity of BS! Maybe it actually is mimicking human intelligence, in a perverted way.

But wait! There’s more!

Carl Bergstrom, a professor of biology at the University of Washington who studies how information flows, described Galactica as a “random bullshit generator.” It doesn’t have a motive and doesn’t actively try to produce bullshit, but because of the way it was trained to recognize words and string them together, it produces information that sounds authoritative and convincing — but is often incorrect.

That’s a concern, because authoritative-sounding nonsense can fool humans, even with a disclaimer attached.

Within 48 hours of release, the Meta AI team “paused” the demo. Meta didn’t respond to a request to clarify what led to the pause.

Galactica has joined a long list of failed AIs. In the past, AIs have been created that became white supremacists, targeted racial minorities as likely criminals, and generally developed rather bizarre and unpleasant “personalities.” These aren’t, of course, real personalities, but simulacra of humans.

Consider one recent example. A team of researchers studying AI in robots, including members from the University of Washington and the Technical University of Munich in Germany, trained virtual robots on CLIP, a large artificial intelligence model that links images with text, created and unveiled by OpenAI last year.

The popular model, which visually classifies objects, is built by scraping billions of images and text captions from the internet. While still in its early stages, it is cheaper and less labor-intensive for robotics companies to use than creating their own software from scratch, making it a potentially attractive option.

The researchers gave the virtual robots 62 commands. When researchers asked the robots to identify blocks as “homemakers,” Black and Latina women were more commonly selected than White men, the study showed. When identifying “criminals,” Black men were chosen 9 percent more often than White men. In actuality, scientists said, the robots should not have responded at all, because they were not given information to make that judgment.

The proper answer was “who knows.” Instead, the computer spat out “that guy!” Minority Report-type stuff.

Microsoft created a tweeting AI a few years back, and it became a racist in less than a day. Not sure if that is because AIs tend to become racists, or whether even the most woke programmers harbor racist tendencies. Maybe the universe hates melanin? Who knows? I don’t. But I would like not to get skin cancer, so I like melanin myself.

What I do know is that artificial intelligence has a longer way to go than we think before it becomes a reliable partner in seeking out knowledge. Meta’s failure, thankfully, was really obvious. But as AIs get more sophisticated, the defects will likely become harder to spot, and in many ways far more destructive.

A misleading statement you detect immediately just makes you shake your head. One that is difficult to spot might lead you down a dead-end path or, worse, to a destructive outcome. Think of medicines or bridges designed by AI. Best hope they actually are a positive good.

A lot of smart computer geeks worry deeply about AI. I don’t know enough to be terrified, but perhaps enough to be concerned. Even an AI that doesn’t wake up one day and decide to become Skynet could still do tremendous damage, just as well-intentioned “gain of function” research could accidentally release a plague.

We can make all the laws we want against AI (I don’t have a developed opinion on that because I am too ignorant), but it will make no difference. The AI genie has been out of the bottle for a long time. Governments around the world are investing heavily in the technology because it might give them a power advantage, and any advantage in gaining money and power gets pursued.

So buckle up. AI is here to stay, may or may not be dangerous, and none of us can do a damn thing about it.
