Character.ai to ban teenagers from speaking to its AI chatbots | EUROtoday
Chatbot website Character.ai is cutting off teenagers from having conversations with virtual characters, after facing intense criticism over the kinds of interactions young people have been having with online companions.
The platform, founded in 2021, is used by millions of people to talk to chatbots powered by artificial intelligence (AI).
But it is facing several lawsuits in the US from parents, including one over the death of a teenager, with some branding it a "clear and present danger" to young people.
Now, Character.ai says that from 25 November under-18s will only be able to generate content such as videos with their characters, rather than talk to them as they can currently.
Online safety campaigners have welcomed the move but said the feature should never have been available to children in the first place.
Character.ai said it was making the changes after "reports and feedback from regulators, safety experts, and parents", which had highlighted concerns about its chatbots' interactions with teenagers.
Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people.
"Today's announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes," Character.ai boss Karandeep Anand told BBC News.
He said AI safety was "a moving target" but something the company had taken an "aggressive" approach to, with parental controls and guardrails.
Online safety group Internet Matters welcomed the announcement, but said safety measures should have been built in from the start.
"Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots," it said.
Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to.
Avatars impersonating the British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her own life at the age of 14 after viewing suicide material online, were found on the site in 2024 before being taken down.
Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on the paedophile Jeffrey Epstein which had logged more than 3,000 chats with users.
The outlet reported that the "Bestie Epstein" avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were subsequently taken down by Character.ai.
The Molly Rose Foundation – which was set up in memory of Molly Russell – questioned the platform's motivations.
"Yet again it has taken sustained pressure from the media and politicians to make a tech firm do the right thing, and it appears that Character AI is choosing to act now before regulators make them," said Andy Burrows, its chief executive.
Mr Anand said the company's new focus was on providing "even deeper gameplay [and] role-play storytelling" features for teenagers – adding that these would be "far safer than what they might be able to do with an open-ended bot".
New age verification methods will also be introduced, and the company will fund a new AI safety research lab.
Social media expert Matt Navarra said it was a "wake-up call" for the AI industry, which is moving "from permissionless innovation to post-crisis regulation".
"When a platform that builds a teen experience still then pulls the plug, it's saying that filtered chats aren't enough when the tech's emotional pull is strong," he told BBC News.
"This isn't about content slips. It's about how AI bots mimic real relationships and blur the lines for young users," he added.
Mr Navarra also said the big challenge for Character.ai will be to create an engaging AI platform which teenagers still want to use, rather than have them move to "less safe alternatives".
Meanwhile Dr Nomisha Kurian, who has researched AI safety, said it was "a sensible move" to restrict teenagers from using chatbots.
“It helps to separate creative play from more personal, emotionally sensitive exchanges,” she stated.
“This is so important for young users still learning to navigate emotional and digital boundaries.
"Character.ai's new measures might reflect a maturing phase in the AI industry – child safety is increasingly being recognised as an urgent priority for responsible innovation."
https://www.bbc.com/news/articles/cq837y3v9y1o