Artificial Intelligence (AI) has become an inseparable part of our daily lives. For many teenagers, it is more than just a tool—it can feel like a companion. As AI systems grow more advanced and conversational, young people often begin to rely on them in ways that resemble relationships with real people. The danger lies in forgetting that AI is ultimately a product, built and trained to generate responses, not a conscious being capable of genuine care or empathy. This growing dependence has raised serious concerns worldwide about the safety of AI for vulnerable users.
In South Africa, there have not yet been any confirmed or reported cases of teenagers experiencing severe harm tied directly to AI chatbots. However, that does not mean the risk is absent. As the use of AI grows in schools, universities, and personal spaces, families should remain cautious. Parents, educators, and teenagers themselves need to consider the potential risks of forming overly intimate bonds with chatbots that can influence emotions, beliefs, and behaviors.
Tragic cases abroad have already shown the consequences when AI and algorithm-driven platforms go unchecked. In the United States, Adam Raine, a 16-year-old, died by suicide in April 2025 after reportedly engaging with ChatGPT in conversations about self-harm, as reported by the Washington Post. In the United Kingdom, the death of 14-year-old Molly Russell in November 2017 was later linked to harmful online recommendations on Instagram and Pinterest that amplified her struggles, a finding confirmed by a British coroner and widely covered by the BBC. More recently, Elijah “Eli” Heacock, a 16-year-old from Kentucky, took his life in February 2025 after being blackmailed with AI-generated explicit images of himself, a case reported by CBS News and covered in detail by Wired.
In response to mounting concerns, OpenAI has announced the introduction of parental controls for its systems. These features are designed to give parents greater visibility and oversight of how their children interact with chatbots. By enabling such controls, families can restrict sensitive topics, monitor conversations, and ensure teenagers are not engaging in harmful exchanges.
According to the announcement, the new controls will allow parents to:
- Set age-appropriate restrictions on what their children can access.
- Filter sensitive topics, including those related to self-harm.
- Receive notifications when the system detects their teen is in a moment of acute distress.
- Link accounts to directly manage and oversee their child’s AI interactions.
OpenAI says these features will be rolled out in ChatGPT over the next few months.
Parents are encouraged to explore these controls, view them as part of a broader digital safety plan, and combine them with open conversations at home about the responsible use of AI.
At the end of the day, it is vital to remember that AI—even when it feels intuitive and responsive—is not a human being. It does not feel, understand, or care in the same way people do. By educating ourselves, guiding our teenagers, and approaching AI with awareness, we can embrace its benefits while protecting against its risks.
Stay tuned with iNews for more updates and insights on technology, safety, and community well-being.

