Meta's new AI model immediately produces racist, inaccurate scientific papers

(AP Photo/Jupiter Systems, Brad Howell)

On Tuesday of last week, Meta AI, the artificial intelligence branch of Facebook’s parent company, Meta, rolled out an exciting announcement for the scientific community. It released a demonstration version of “Galactica,” a new large language model (LLM) intended to be a tool the scientific community could use to generate new scientific papers. Unfortunately, problems cropped up rather quickly once the target audience started putting it through its paces. The new LLM was indeed able to produce what appeared to be some impressive scientific literature. But it also produced a lot of what was described as “realistic-looking nonsense.” And some of the instructions it generated ranged from blatantly racist to outright dangerous. Within days, Galactica had been pulled down and sent back to the drawing board. (Ars Technica)


On Tuesday, Meta AI unveiled a demo of Galactica, a large language model designed to “store, combine and reason about scientific knowledge.” While it was intended to accelerate the writing of scientific literature, adversarial users running tests found it could also generate realistic nonsense. After several days of ethical criticism, Meta took the demo offline, reports MIT Technology Review.

Large language models (LLMs), such as OpenAI’s GPT-3, learn to write text by studying millions of examples and understanding the statistical relationships between words. As a result, they can author convincing-sounding documents, but those works can also be riddled with falsehoods and potentially harmful stereotypes. Some critics call LLMs “stochastic parrots” for their ability to convincingly spit out text without understanding its meaning.
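To make that “statistical relationships between words” point concrete, here’s a minimal, purely illustrative sketch of the idea: a toy bigram model that predicts each next word only from how often words followed each other in its training text. Nothing here reflects Galactica’s actual architecture or code; real LLMs use neural networks with billions of parameters, but the core limitation is the same, since the model tracks word patterns, not truth.

```python
import random
from collections import defaultdict, Counter

# Tiny toy corpus standing in for the millions of documents a real LLM trains on.
corpus = (
    "the vaccine was tested in a large trial . "
    "the vaccine was approved after a large trial . "
    "the trial was large ."
).split()

# Count how often each word follows each other word (bigram statistics).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    """Emit text by repeatedly sampling a statistically likely next word.

    The model has no notion of whether its output is true,
    only of which words tend to follow which other words.
    """
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Fluent-looking but content-free output, e.g.:
# "the vaccine was approved after a large trial ."
```

Scale that same principle up to billions of parameters and tens of millions of scientific papers and you get prose that reads like research while any accuracy it has is, in a sense, statistical coincidence.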

Enter Galactica, an LLM aimed at writing scientific literature. Its authors trained Galactica on “a large and curated corpus of humanity’s scientific knowledge,” including over 48 million papers, textbooks and lecture notes, scientific websites, and encyclopedias.

Here’s one example from a futurist who was able to use Galactica to generate research papers, wiki entries, and scientific instructions on a range of, um… interesting topics.


It’s challenging to get anything as complicated as Galactica 100% reliable on the first try, so some errors were probably to be expected. Wiki entries on the benefits of suicide and of “being white” could easily have had their roots somewhere in the literal tens of millions of papers, textbooks, and other scientific documents that were fed into it. But you’d think that some sort of guardrails would have been installed to prevent it from singing the praises of humans eating crushed glass. (I’d like to see the original paper that came from.)

Michael Black, the Director of the Max Planck Institute for Intelligent Systems, fed a series of requests into Galactica. (You can read the full results in this Twitter thread, assuming Twitter is still working today.) In some responses, Galactica cited lengthy scientific references from known sources, but the references did not exist. They were made up out of whole cloth.

Black described Galactica as “an interesting research project,” but went on to also describe it as “not useful” and “dangerous.” The danger lies in the system’s ability to produce results that look quite valid and use grammatically correct scientific lingo while being completely, and potentially dangerously, off the mark. He notes that papers produced by Galactica could easily slip into published research work in the form of deep fakes that might be very difficult to detect and retract.

This wouldn’t be a new problem, sadly. As far back as 2014, there were reports of respected scientific journals publishing highly technical-sounding papers that wound up being identified as “gobbledygook.” And most of those papers were generated using earlier computer programs that were designed to do precisely that.


I just hope Galactica isn’t being used to cook up the next round of COVID vaccine boosters. Then again, I probably shouldn’t be giving them ideas.
