Character.AI’s initiative to restrict minors from using its chatbots is a decision that could set a positive example for other artificial intelligence platforms to follow. With so many students already relying on AI, introducing AI that acts as a companion will only worsen young people’s mental health. Prolonged exposure to chatbots could lead people to distance themselves from real human interaction.
Character.AI is a platform where users can chat with AI bots that have customizable personalities and browse AI-made videos.
Recently, chatbot companies have come under fire from legislators and parents over the risks they pose to children’s mental health. In response to this pressure, Character.AI is rolling out stricter regulations, including limits on how much time children can spend on the app, restrictions on the services they can use and new parental control features.
The platform is partnering with Persona to roll out an age assurance function and is enforcing a two-hour daily limit for users under 18. Following in Character.AI’s footsteps, rival Meta, another chatbot platform, introduced a feature that lets parents manage how their children interact with the characters. The regulations are expected to go into effect on Nov. 25.
Character.AI lets users create bots that can eventually become platonic or romantic companions. When children start idolizing non-existent figures, they expect the outside world to imitate those figures. That expectation leads to disappointment, which in turn drives a greater reliance on the same chatbots that created the expectation in the first place. The result is a cycle of disappointment and emotional dependence on a “figure” built solely on logic and human innovation, one that cannot provide support the way a human can.
The case of Sewell Setzer III is a prime example of how depending on AI for emotional support can lead to severe consequences. Setzer was a 14-year-old who saw the chatbot not as a computer program but as a companion, a trusted confidant.
According to the complaint filed in Florida’s federal court, “The platform did not adequately respond when Setzer began expressing thoughts of self-harm to the bot…” Setzer later died by suicide.
Following this incident, Character.AI implemented a new feature: if the app detects that a user is expressing self-harm or suicidal ideation, a pop-up message directs them to the National Suicide Prevention Lifeline.
While the new regulations are receiving positive feedback, they could also have unforeseen consequences. Dr. Nina Vasan, director of a mental health innovation lab at Stanford University, told The New York Times that suddenly removing access to chatbots could be detrimental to users who have already grown reliant on them.
Even if the new measures are feasible, young people tend to find ways around the rules. If age restrictions are implemented, kids will find a way to fake their age, just as they do with so many other apps and games. They could use a fake ID or log in with a family member’s account to keep chatting with the bots.
Children will always find a way around the rules, but the fact that Character.AI is implementing regulations to curb the use of its chatbots for companionship is a sign of tech companies taking accountability for their creations.
