ChatGPT costs too much to run


The current debate about pausing further development of artificial intelligence centers on fears of what might happen if or when we finally achieve Artificial General Intelligence, or AGI (a machine intellect at least equal to that of a human being). Even Elon Musk has jumped into the debate, arguing that we haven't put sufficient guardrails in place. But the two main players in the AI field, OpenAI and Google, may solve the problem for us even if they aren't worried about robots with guns taking over the world. As it turns out, keeping a massive language model AI system like ChatGPT online costs a bloody fortune, and it's not generating much in the way of profits. (Futurism)


ChatGPT’s immense popularity and power make it eye-wateringly expensive to maintain, The Information reports, with OpenAI paying up to $700,000 a day to keep its beefy infrastructure running, based on figures from the research firm SemiAnalysis.

“Most of this cost is based around the expensive servers they require,” Dylan Patel, chief analyst at the firm, told the publication.

The costs could be even higher now, Patel told Insider in a follow-up interview, because these estimates were based on GPT-3, the previous model that powers the older and now free version of ChatGPT.

That’s almost three-quarters of a million dollars every single day. And the bulk of that money goes toward the expensive, specialized servers the system requires and the electricity needed to keep them all running. Those servers use powerful computer chips capable of housing and sorting through mountains of data, and they draw a lot of current off of the grid.

To address that issue, Microsoft has been developing its own AI chip, code-named “Athena.” It’s reportedly already being tested and should be ready for a full rollout this year. If it works, the company should be able to replace the Nvidia processors it’s using now with something more efficient and less expensive to operate.


But will that be enough? Discussions are currently taking place around the economics of artificial intelligence and how it might become a profitable business. Most of the proposals involve either advertising revenue or licensing the technology out for other companies to use in their products. But questions of reliability remain a concern. As smart as it may appear, ChatGPT still makes mistakes. And even when it performs brilliantly, there are lingering copyright concerns and even questions of legal liability if you’re trying to use the finished product commercially.

And then there are the growing concerns that products like ChatGPT will soon start eliminating entry-level jobs.

Tech doomsayers and self-interested AI boosters have warned the rise of ChatGPT and other generative AI systems could wipe entry-level jobs or lower performers, but new research says experienced workers may actually have more to worry about. Customer support agents using a generative AI conversation assistant in a new study saw a 14% uptick in productivity compared to others who didn’t use the tool. Though the introduction of the AI assistant led to some improvements across the board, the research suggested those gains “accrue disproportionately to less-experienced and lower-skill workers.”


That information comes from a new study recently released by researchers at Stanford. How much of an “achievement” will AGI technology be if it sends increasingly large numbers of human beings to the unemployment lines? Not every job will be vulnerable, but you might be surprised at the breadth of fields where AI could quickly replace human workers, from the transportation sector to the fast food industry. AI might still wipe a lot of us out without going all the way to a SKYNET situation.
