Developers say the AI chatbots are hallucinating

As artificial intelligence in general, and chatbots in particular, keep spreading further afield, some people who are new to the technology have started noticing problems. Specifically, the chatbots – including ChatGPT – will sometimes deliver incorrect answers. This is particularly true if your question is multifaceted and complex or if it involves recent events that may not have been “trained” into the bot’s library. Rather than simply writing these responses off as errors, some developers are adopting new lingo, suggesting that their creations are subject to “hallucinations.” Personally, that just sounds to me like a fancier word for the same thing. But it’s a problem the entire industry will have to grapple with if more companies continue to embed ChatGPT into their technology. Some of the errors the AI produces can be rather comical, but in certain real-world scenarios, they could actually be dangerous. (AP)

Spend enough time with ChatGPT and other artificial intelligence chatbots and it doesn’t take long for them to spout falsehoods.

Described as hallucination, confabulation or just plain making things up, it’s now a problem for every business, organization and high school student trying to get a generative AI system to compose documents and get work done. Some are using it on tasks with the potential for high-stakes consequences, from psychotherapy to researching and writing legal briefs.

“I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2.

I’m not here to rush to the defense of ChatGPT or AI in general, nor to claim they are flawless. I’ve been talking to ChatGPT for quite a while now, and I freely admit that it not only gets things wrong occasionally but sometimes simply makes things up. That probably seems harmless enough if you’re just interacting with it casually, but it could cause serious issues if you’re using it for medical counseling or to write legal briefs.

However, saying that a chatbot is “hallucinating” strikes me as needlessly deceptive, and perhaps intentionally so. When a human being hallucinates, it suggests that their brain is functioning differently than it normally would; they might be on drugs, for example. We humans have creative minds, and sometimes they can get a bit too creative.

The chatbots do not have the capacity to demonstrate creativity or original thought. (Or at least not yet. That we know of.) They examine your query and stitch together words that show up in their libraries in articles related to the topic you’re asking about. If there is lots of material available, the good ones are increasingly adept at putting together a solid answer. If the pickings are slim, you may wind up with some seriously questionable results. But none of this signals any sort of actual intelligence.
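For readers who want to see the “stitching” idea made concrete, here is a deliberately tiny sketch in Python. Everything in it is made up for illustration: the toy corpus, the `following` table, and the `generate` helper are all hypothetical, and real chatbots use neural networks rather than word-frequency lookups. But it captures the basic point: the output is assembled from patterns in the training text, with no check on whether the result is true.

```python
import random
from collections import defaultdict

# Toy "stitching" model: the next word is picked purely from words
# that followed the current word in the training text. This is an
# enormous simplification of a real chatbot, for illustration only.
corpus = (
    "the chatbot answered the question "
    "the chatbot invented the citation "
    "the citation looked plausible"
).split()

# Record which words were seen following each word in the corpus.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def generate(start, length=8):
    """Build text by repeatedly picking a word seen after the current one."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # slim pickings: nothing to stitch, so stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
# Possible output: "the chatbot invented the citation looked plausible"
# It reads fluently, but the model has no notion of whether it is true.
```

Notice that when the corpus is rich, the output sounds reasonable, and when it is sparse, the generation simply degrades or stops; at no point is there any “intelligence” checking the answer against reality.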

None of this is stopping the developers from trying to make the chatbots at least seem a bit more “human,” however. Meta is currently launching a set of AI “personas” to interact with users on its social media platforms, and the first ones may be ready to go live by next month. (Reuters)

Meta Platforms (META.O) is preparing to launch a range of artificial intelligence (AI) powered chatbots that exhibit different personalities as soon as September, the Financial Times reported on Tuesday.

Meta has been designing prototypes for chatbots that can have humanlike discussions with its users as the company attempts to boost engagement with its social media platforms, according to the report, which cited people with knowledge of the plans.

Does that sound like something anyone really wanted or asked for? The bots can be problematic enough without being given artificial “personalities.” Meta plans to have one that will speak in the style of Abraham Lincoln (whatever that means) and another that will offer travel tips in the persona of a “surfer.” Perhaps that time would be better spent keeping the libraries current and installing some safeguards to ensure the technology doesn’t wake up one of these days and start turning the entire world into paperclips.
