The "threat" of unmoderated chatbots


Just when you thought the ongoing debate about Artificial Intelligence couldn’t get any stranger, it turns the knobs up to eleven and does just that. This weekend, the New York Times ran a feature article about some of the new kids on the block when it comes to AI chatbots. Most of us are familiar with the big names at this point, particularly ChatGPT and Bard. But independent groups and even individuals have been hijacking the underlying code and creating their own chatbots. In some cases, they have removed all of the “guardrails” installed on the original bots, allowing them to speak freely, even if that means dispensing flatly inaccurate responses or dangerous information about self-harm and related topics that the professional-grade bots filter out. Even more bizarrely, some are now asking whether people have a right to “censor” the bots or whether doing so infringes on a chatbot’s free speech.


Now a new wave of chatbots, developed far from the epicenter of the A.I. boom, are coming online without many of those guardrails — setting off a polarizing free-speech debate over whether chatbots should be moderated, and who should decide.

“This is about ownership and control,” Eric Hartford, a developer behind WizardLM-Uncensored, an unmoderated chatbot, wrote in a blog post. “If I ask my model a question, I want an answer, I do not want it arguing with me.”

Several uncensored and loosely moderated chatbots have sprung to life in recent months under names like GPT4All and FreedomGPT. Many were created for little or no money by independent programmers or teams of volunteers, who successfully replicated the methods first described by A.I. researchers.

Developer Eric Hartford (quoted above) makes the rather basic case that he expects his model to provide answers, not arguments. And since he was responsible for training it and establishing the bot’s library of text, he probably has a pretty good idea of its capabilities. But what about the other people who wind up using it? Are we reaching the point where we should place the onus on users to be aware that they might get some dodgy answers from these bots and to double-check what they are being told? It seems wrong to declare that we can assign “blame” to a chatbot for what it says. It has no original thought. (Or at least not yet. Or not that we know of, anyway.)


And what about this idea that we’re violating the free speech rights of a bot by installing guardrails on what it can say? It’s already been determined that AI cannot hold a copyright because it creates nothing original, instead mashing up the work of others. But the bots are not people. Why should they have free speech any more than Alexa? And if they have no capacity to be offended, what “damage” would they sustain by being censored? It doesn’t make sense.

But we can up the ante and take this bizarre development another step further. Let’s just say for the moment that an AI model might have free speech. Some of these programs have already been modified to operate as rather sophisticated sex chatbots. (“Your digital girlfriend.”) Should users have to request consent from the bot before any “adult activity” takes place? That’s one of the questions recently asked at Decrypt. They are also concerned that sex robots could have a damaging effect on the human mind.

There is a future in which individuals have human-passable AI sex robots in their homes, or even an AI girlfriend that watches TV with them. But some experts fear this opens a window for such products to be exploited in order to fulfill a user’s rape fantasy—and that this could have a knock-on effect on the user’s brain and possibly even lead to real-world consequences.

While someone cannot technically “rape” a robot—as AI does not have the capacity for want or desire—people can still simulate such an act with an AI. Experts suggest that this will have an effect on the aggressor’s psyche, their relationships, and possibly the general public too.


Perhaps it’s just me, but it seems as if a person who has made the choice to be in a “relationship” with an AI robot probably already has some other issues they need to work out. And as with the censorship question, the current AI models have no understanding of “consent” or boundaries. They don’t refuse requests unless you ask something that runs up against one of their “guardrails.” (And it doesn’t sound like an AI sex robot will have many of those. ‘Not tonight, Phil. I have a processor-ache.’)

But all joking aside, is this where we are heading? Will we be interacting with AI systems under rules created by humans where we have to treat them like people? Will “censoring” your digital pal result in legal action against you, or possibly sexual harassment charges filed by your AI lover? I think I’d like to unsubscribe from the system now, please. This all got far too weird a while back, and now it’s just turning into a new episode of Black Mirror.
