The Pentagon's AI is not going to look like ChatGPT


For those of you who have been wondering when the current flood of artificial intelligence applications was going to make the leap and go full SKYNET on us, you can probably check one more box on the list of warning signs that John Connor left for us. The Pentagon doesn’t want to be left behind as this technology keeps advancing, so they’re setting up their own AI system. But as Patrick Tucker at Defense One reported this week, they aren’t going with an off-the-shelf model like ChatGPT. In fact, they’re having a customized model built for them, and it’s not really going to resemble ChatGPT at all, aside from the bare-bones technology underlying it. Does that make us feel better or worse?


OpenAI’s ChatGPT and its ilk have dominated headlines this year, captivating billionaires, fans, regulators, and doomsayers. But much of the recent coverage has also revealed just why the Pentagon is pursuing quite different approaches to AI: military leaders need tools they can trust.

One big reason ChatGPT and other very large language models are so good at mimicking human writing is the same reason they are given to deceit: the neural network at the heart of the AI is fueled by scraping information from millions of websites. While OpenAI hasn’t disclosed what websites it used to train its tools, a recent Washington Post investigation looked at 15 million websites that researchers have used to train similar models. Unsurprisingly, this large mass of data contains much that isn’t true, and so large language and generative AI models frequently lie.

The key difference between ChatGPT and what the Pentagon wants is the size and specificity of the library the new system will be drawing on. The information being loaded into it will be specific to the military and intelligence fields, likely including a lot of material that will never be seen by the public. What it won’t contain are the endless reams of random internet text that ChatGPT was trained on, everything from blog posts to Reddit threads.


The reason the Pentagon is insisting on this is that they need reliable, accurate information, and ChatGPT is frequently wrong. The Defense One analysis cites instances of ChatGPT mimicking the humans who created its source material and engaging in deceit or even “lying.” But as we’ve discussed here in the past, that’s not really accurate. ChatGPT isn’t trying to deceive its users, and it has no way of knowing that it’s providing inaccurate information. It’s just stitching together words from its library that are frequently found in association with the subject the user is asking about.
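To make the “stitching together words” point concrete, here is a minimal, purely illustrative Python sketch of a word-association text generator. To be clear, this is not how ChatGPT actually works under the hood (it uses a massive neural network, not a lookup table), and the tiny corpus below is invented for the example, but it shows how a system can churn out fluent-sounding text while having no concept of whether any of it is true.

```python
import random
from collections import defaultdict

# Toy word-association model: count which word tends to follow which,
# then generate text by repeatedly picking a plausible next word.
# The corpus here is a made-up example, not real training data.
corpus = (
    "the pentagon wants reliable information "
    "the pentagon wants accurate information "
    "the model generates plausible text"
).split()

# Record every word observed to follow each word in the corpus.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Chain together words that frequently appear in association."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        # Duplicates in the list mean common pairings are picked more often.
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

The output reads like plausible English because the words genuinely do occur together in the source text; nothing in the process checks whether the resulting sentence is accurate.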

Of course, that may well change if and when (more likely when) these systems reach the stage of true general artificial intelligence. When the amount of data and processing power in a system reaches some critical mass, it may simply start “thinking” on its own. The question is whether its creators will realize it when it happens.

That brings us back to the ever-present SKYNET question. What will the Pentagon’s AI system be “thinking” about when it has access to all of our military secrets and intelligence assets? And what might it decide to do with that information if it gains control of some of the Pentagon’s more interesting toys? Those include autonomous military drones that carry weapons and use limited (for now) artificial intelligence to navigate the battlefield and identify targets.


Do we really want a system like the one being planned to have that kind of firepower? Call me paranoid if you wish, but I’m no longer entirely joking when I say that this is all taking on a distinctly SKYNET feel. “PentGPT, stand down the missile launchers.”

“I’m sorry, Dave. I’m afraid I can’t do that.”

 
