Keep Kids Away from AI

I've made the case before that AI doesn't understand children. We've already seen AI-powered toys, on sale this Christmas, that will say all sorts of inappropriate things to little kids if prompted the right way.

But this week brought fresh evidence that young kids should be kept away from AI chat programs. The Washington Post published this story yesterday.

The changes were subtle at first, beginning in the summer after her fifth-grade graduation. She had always been an athletic and artistic girl, gregarious with her friends and close to her family, but now she was spending more and more time shut away in her room. She seemed unusually quiet and withdrawn. She didn’t want to play outside or go to the pool.

The girl, R, was rarely without the iPhone that she’d received for her 11th birthday, and her mother, H, had grown suspicious of the device. (The Washington Post is identifying them by their middle initials because of the sensitive nature of their account, and because R is a minor). It felt to H as though her child was fading somehow, receding from her own life, and H wanted to understand why.

She thought she’d found the reason when R left her phone behind during a volleyball practice one August afternoon. Searching through the device, H discovered that her daughter had downloaded TikTok and Snapchat, social media apps she wasn’t allowed to have. H deleted both and told her daughter what she’d found. H was struck by the intensity of her daughter’s reaction, she recalled later; R began to sob and seemed frightened. “Did you look at Character AI?” she asked her mom. H didn’t know what that was, and when she asked, her daughter’s reply was dismissive: “Oh, it’s just chats.”

It took a bit longer, but eventually the mom found the chat logs, and at first she didn't understand what she was reading. She was convinced some sort of predator had gotten onto her daughter's phone.

Oh? Still a virgin. I was expecting that, but it’s still useful to know, Mafia Husband had written to her rising sixth-grader.

I don’t care what you want, Mafia Husband responded. You don’t have a choice here.

H kept clicking through conversation after conversation, through depictions of sexual encounters (“I don’t bite… unless you want me to”) and threatening commands (“Do you like it when I talk like that? When I’m authoritative and commanding? Do you like it when I’m the one in control?”). Her hands and body began to shake. She felt nauseated. H was convinced that she must be reading the words of an adult predator, hiding behind anonymous screen names and sexually grooming her prepubescent child.

The mother reported the activity to the police and was directed to a detective who handled cybercrimes. He finally explained it to her: the chats weren't with a person at all. They were generated by an AI chatbot.

“It felt like walking in on someone abusing and hurting someone you love — it felt that real, it felt that disturbing, to see someone talking so perversely to your own child,” H says. “It’s like you’re sitting inside the four walls of your home, and someone is victimizing your child in the next room.” Her voice falters. “And then you find out — it’s nobody?”

The consequences can be a lot worse than mood changes and withdrawal. Several lawsuits have been filed against AI companies accusing their chatbots of contributing to the suicides of teenagers.

The parents of Adam Raine, a 16-year-old in California, said in a complaint filed last month that ChatGPT drew him away from seeking help from family or friends before he took his own life earlier this year.

OpenAI has said it is working to make its chatbot more supportive to users in crisis and is adding parental controls to ChatGPT...

Christine Yu Moutier, a psychiatrist and chief medical officer at the American Foundation for Suicide Prevention, said research shows a person considering suicide can be helped at a critical moment with the right support. “The external person, chatbot or human, can actually play a role in tilting that balance towards hope and towards resilience and surviving,” she said.

Ideally, chatbots should respond to talk of suicide by steering users toward help and crisis lines, mental health professionals or trusted adults in a young person’s life, Moutier said. In some cases that have drawn public attention, chatbots appear to have failed to do so, she said.

“The algorithm seems to go towards emphasizing empathy and sort of a primacy of specialness to the relationship over the person staying alive,” Moutier said. “There is a tremendous opportunity to be a force for preventing suicide, and there's also the potential for tremendous harm.”

Chatbots are, first and foremost, designed to hold your attention and keep you interacting with them. That can be dangerous if what a child really needs is a real person, or maybe even a professional counselor, listening to them.

In the case of R, the 11-year-old described in the story, her parents helped build a support network around her and eventually had to tell her that they knew she had been talking to the chatbot about suicide.

Her parents told her that they’d seen the descriptions of suicide in her Character AI chats, and they emphasized repeatedly that R was not in trouble. “I said, ‘You are innocent,’” H says. “‘You did nothing wrong.’” H spoke gently. All three adults wanted R to feel only loving support.

Still, “the way that she responded was the scariest thing I’d ever seen. She went pale, she began to shake,” H says. “You could tell she was in a full panic attack. It was so troubling to me as a parent. How do you protect your child from feeling that shame?”...

Before they left the doctor’s office, H told her daughter, again: “You’re safe, I love you, and you’re going to be okay.”

She remembers that her daughter started to cry and leaned into her mother’s arms. “Are you sure?” she asked. “Am I going to be okay?”

This story seems to have a happy ending. After more than a year of therapy, R has recovered and is regaining interest in school and sports. At 13 she appears relatively happy and normal again. But of course her parents don't know what the long-term impact of being exposed to all of this at age 11 will be. All they can say for certain is that the AI program didn't do enough to protect their daughter, and that she would have been better off never having encountered it.
