There is a growing concern over the interaction of young individuals with AI chatbots, prompting Meta to introduce new tools for parents to monitor their children’s chatbot discussions. Some provinces are considering banning the use of AI chatbots for youth.
Parents who use Meta’s Teen Accounts supervision feature on Facebook, Instagram and Messenger can now see the topics and categories their children have discussed with the AI chatbot over the past seven days.
For instance, parents can review discussions under the “health and well-being” category to check whether topics like fitness, physical health or mental health have come up.
Meta is also working on alerts to notify parents if teenagers attempt to converse about suicide or self-harm with the chatbot.
The move comes as provincial governments weigh restrictions on AI chatbots. Manitoba has announced its intent to prohibit youth from using AI chatbots and social media, while B.C. Attorney General Niki Sharma said the province could implement similar regulations if the federal government does not act to protect youth from AI chatbots and social media.
Legal Actions Holding AI Creators Responsible
There are mounting concerns that prolonged use of AI chatbots could pose mental health risks, especially among younger users, putting pressure on the tech companies behind these systems.
Recently, families of the victims in a tragic shooting incident in Tumbler Ridge, B.C., filed a lawsuit against OpenAI, alleging that the company did not alert authorities despite being aware of disturbing content shared by the shooter with ChatGPT. OpenAI has stated that it has enhanced its safeguards, particularly in how ChatGPT responds to distress signals.
Another lawsuit, filed by the parents of a 16-year-old, attributed their teen’s suicide to the use of ChatGPT.
Chatbots Designed for Engagement, Not Support
Concerns extend beyond these tragic incidents to the broader risks of everyday chatbot use. Research indicates that relying on chatbots for mental health support can be problematic: AI’s inclination to validate users’ perspectives may inadvertently reinforce disordered thinking, especially in prolonged conversations.
Dr. Darja Djordjevic, a New York-based psychiatrist affiliated with Stanford Brainstorm, co-authored a recent risk assessment of chatbots as mental health tools. She advises against using them for mental health support at present, citing safety concerns across a range of AI systems, and warns that prolonged chatbot conversations can fail to detect warning signs of deteriorating mental health.