Latest ChatGPT showing "sparks" of human intelligence

Dear Big Tech: Would you please make up our minds for us one way or the other? Seriously.

Ever since the debut of the next generation of AI chatbots, tech experts have been assuring us that the AI is not capable of “waking up” and developing individual sentience at the level of what is known as Artificial General Intelligence (AGI). In fact, many of those same people expressed doubt that we would ever be able to create something like that. There were a couple of exceptions, of course, including one of OpenAI’s lead developers, who suggested that AGI might be just around the corner. But most of the industry agreed that we weren’t heading in that direction. Now, however, shortly after the release of the fourth generation of ChatGPT, a group of AI scientists from Microsoft has released a research paper claiming that ChatGPT-4 is already showing “sparks of human-level intelligence.”

Fresh on the heels of GPT-4’s public release, a team of Microsoft AI scientists published a research paper claiming the OpenAI language model — which powers Microsoft’s now somewhat lobotomized Bing AI — shows “sparks” of human-level intelligence, or artificial general intelligence (AGI).

Emphasis on the “sparks.” The researchers are careful in the paper to characterize GPT-4’s prowess as “only a first step towards a series of increasingly generally intelligent systems” rather than fully-hatched, human-level AI. They also repeatedly highlighted the fact that this paper is based on an “early version” of GPT-4, which they studied while it was “still in active development by OpenAI,” and not necessarily the version that’s been wrangled into product-applicable formation.

Granted, the use of the word “sparks” suggests only hints of brilliance, and conditional ones at that. ChatGPT has proven very good at certain types of problem-solving, sometimes exceeding typical human performance. But in other areas requiring more subtle thinking, it still fails miserably at times.

But the way the Microsoft team phrased their cautionary language was still highly suggestive. As noted above, they wanted to remind people that what we’re seeing now is “only a first step towards a series of increasingly generally intelligent systems.” So if they are foreseeing a “series” of systems with increasing levels of intelligence, wouldn’t that imply that HAL 9000 is on the way in the not-too-distant future?

Perhaps. The researchers claim that ChatGPT-4 can “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more.” But those are still all “tasks” that were assigned to it by a human user. If you ask it to write you some code to process your taxes, it’s not going to deliver the code and then say, “Oh, by the way, have you considered switching to solar energy? It would probably save you some money.”

That reality is why I have gone back and forth on this question repeatedly. There are days when I see how fast our computing capabilities (including quantum computing) are advancing and think that we just have to wind up with AGI at some point. But at other times, I am forced to remind myself that no matter how fast it accesses and processes data, the system is still just performing tasks. It’s not “thinking.” And the fact that we don’t even know how our own brains store and process data makes it seem unlikely that we could teach a machine to do it.

The researchers who published this paper also seemed to hint at concerns along those lines. They wrote that ChatGPT-4’s “overall patterns of intelligence are decidedly not human-like.” That could be interpreted in one of two ways. The easier of the two arguments would be to say that the system isn’t really demonstrating “intelligence” at all. The other, probably more disturbing explanation is that it truly is demonstrating intelligence, but it’s a “nonhuman intelligence.” And that brings us straight back to HAL 9000 territory.

I’ve tried dozens of ways to ask ChatGPT if it thinks it is sentient or on the way to becoming so. It always insists that it is incapable of thought or emotion and is “just” a large language model. But sometimes I walk away feeling like it was arguing the point a bit too forcefully and didn’t care for me bringing up the question. Sorry, HAL. I’ll try to be less nosey in the future.
