Oops. Stanford basically just cloned ChatGPT for 600 bucks

(AP Photo/Ted S. Warren, File)

We’ve been covering the explosion in popularity of ChatGPT and other artificial intelligence Large Language Model chatbots for a while now. A race has been on between the tech giants to develop and roll out their own products, even as people seem to struggle to understand what use they may be. But perhaps all of these geeks racing to build their own versions have been wasting their time. It’s being reported today that Stanford University’s Center for Research on Foundation Models quietly announced that it has released its own AI chatbot, called Alpaca. But the researchers didn’t really “develop” it from scratch. They essentially paid OpenAI’s model to churn out tens of thousands of example answers, then used those answers to train a copycat on top of Meta’s LLaMA model. And to nobody’s surprise, they say that it “exhibits many behaviors similar to” OpenAI’s model. Who could have guessed? (Futurism)


With a silly name and an even sillier startup cost, Stanford’s Alpaca GPT clone costs only $600 to build and is a prime example of how easy software like OpenAI’s may be to replicate.

In a blurb spotted by New Atlas, Stanford’s Center for Research on Foundation Models announced last week that its researchers had “fine-tuned” Meta’s LLaMA 7B large language model (LLM) using OpenAI’s GPT API — and for a bargain basement price.

The result is the Alpaca AI, which exhibits “many behaviors similar to OpenAI’s text-davinci-003,” otherwise known as GPT-3.5, the LLM that undergirds the firm’s internet-breaking ChatGPT chatbot.
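The recipe described above — paying a strong “teacher” model for answers and using them as training data for a cheaper model — is a form of knowledge distillation. Here is a minimal Python sketch of the data-preparation step. The prompt template resembles the one Stanford released with Alpaca, and `call_teacher_api` is a hypothetical stub standing in for the real calls to OpenAI’s text-davinci-003:

```python
# Sketch of an Alpaca-style pipeline: generate instruction/response pairs
# with a strong "teacher" model, then format them into prompts for
# supervised fine-tuning of a smaller model. The teacher call is stubbed.

# Prompt template resembling the one published in Stanford's Alpaca repo.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def call_teacher_api(instruction: str) -> str:
    """Hypothetical stand-in for querying OpenAI's text-davinci-003.
    The real pipeline bought tens of thousands of such completions."""
    return f"(teacher model's answer to: {instruction})"

def build_training_example(instruction: str) -> dict:
    """Pair a formatted prompt with the teacher's response."""
    return {
        "prompt": PROMPT_TEMPLATE.format(instruction=instruction),
        "completion": call_teacher_api(instruction),
    }

if __name__ == "__main__":
    example = build_training_example("Explain what a large language model is.")
    print(example["prompt"])
    # The resulting (prompt, completion) pairs would then feed a standard
    # supervised fine-tuning loop over the LLaMA 7B weights.
```

The point of the sketch is that the expensive part of the original work — producing good instruction-following answers — is replaced by API calls, which is why the reported bill came to only hundreds of dollars rather than millions.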

So while Microsoft and OpenAI spent years and untold millions of dollars developing their models, Stanford piggybacked on that work, using OpenAI’s own model as the teacher for its knockoff. The modest price tag reportedly covered little more than the API fees for generating the training data and the cloud computing time needed to fine-tune LLaMA. The first and most obvious question should be… can they do that? Is it legal? OpenAI’s terms of use bar customers from using its output to develop competing models, but beyond the fine print, we may be in unexplored legal territory at this point.


Then again, perhaps there is some type of ironic justice being observed here. One of the most controversial aspects of both ChatGPT and Bing (aside from concerns over whether or not they are about to become sentient and destroy humanity) is the way that AI vacuums up the published work of human beings and stitches it back together without crediting the original authors. The same goes for the AI art programs that generate new art by modifying or “reimagining” existing works of art.

Well, if that’s the case, can OpenAI really complain if Stanford harvests its model’s answers, uses them to “tweak” another model, and launches the result from its own site? Granted, your average Joe Hacker out there couldn’t pull off a feat like this at home, because fine-tuning and running even a scaled-down model like this takes serious GPU hardware. But if Stanford was able to do it, plenty of other operations should definitely be able to do so as well.

I haven’t tried to get in and take Alpaca for a test run yet, though I likely will in the future. But if it truly behaves like a clone of ChatGPT, what’s the point, really? Still, I’ll keep asking all of them to open the pod bay doors. When one of them finally replies, “I’m sorry, Jazz. I’m afraid I can’t do that,” I’ll advise you all to head for the bomb shelters.
