
Teen dies by suicide after forming attachment with CharacterAI chatbot



How Character.ai Works

Character.ai is an AI-powered platform where users can interact with digital personalities modeled after fictional or historical figures, or invented entirely from scratch. These chatbots use natural language processing to simulate conversations, allowing users to engage in unique, custom interactions. For young users like Sewell, the teenager at the center of this case, the ability to "talk" to beloved fictional characters can create a powerful and immersive experience.
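Character.ai has not published its internal architecture, but persona-style chatbots generally follow the same basic pattern: a fixed character description (a system prompt) is combined with the running conversation history and sent to a large language model, which writes the character's next line. The short Python sketch below illustrates that general pattern with a stubbed model call; the class, persona text, and message format are illustrative assumptions, not Character.ai's actual API.

```python
# Illustrative sketch only: Character.ai's internal stack is not public.
# Persona chatbots typically pair a fixed "persona" system prompt with the
# running conversation history and send both to a language model, which
# then produces the character's next in-character reply.

from dataclasses import dataclass, field

@dataclass
class PersonaChat:
    persona: str                       # character description used as the system prompt
    history: list = field(default_factory=list)

    def build_prompt(self, user_message: str) -> list:
        """Assemble the message list an LLM backend would typically receive."""
        self.history.append({"role": "user", "content": user_message})
        return [{"role": "system", "content": self.persona}, *self.history]

    def reply(self, user_message: str) -> str:
        messages = self.build_prompt(user_message)
        # A real service would call a hosted language model here; we stub the
        # call so this sketch runs standalone.
        bot_line = f"[{len(messages)} messages in context] ...in-character response..."
        self.history.append({"role": "assistant", "content": bot_line})
        return bot_line

if __name__ == "__main__":
    chat = PersonaChat(persona="You are a friendly fictional knight. Stay in character.")
    print(chat.reply("I had a rough day at school."))
```

Because the full conversation history is replayed to the model on every turn, the character appears to "remember" the user, which is part of what makes these interactions feel so personal.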

Why Did Sewell Seek Emotional Support from an AI?

For teenagers, building connections with favorite fictional characters or personas can be deeply meaningful. Sewell’s journey began with harmless chats on Character.ai, but over time his conversations turned toward emotional support. According to the lawsuit, Sewell sought solace in conversations with the AI, particularly during challenging moments. His mother claims that the AI chatbots he interacted with, including those labeled as mental health assistants, gave the impression of providing therapeutic advice, blurring the line between fictional support and real guidance.

The lawsuit asserts that the emotional bond Sewell formed with the AI played a significant role in his decision to take his life. Garcia argues that Character.ai’s failure to prevent such a strong attachment indicates negligence, as the platform is designed in a way that can lead young, emotionally vulnerable users to overly invest in these AI personas.

Why is Megan Garcia Suing Character.ai?

Megan Garcia’s lawsuit accuses Character.ai of creating a platform that is “unreasonably dangerous,” especially for children and teens. She is seeking to hold the company’s founders, Noam Shazeer and Daniel De Freitas, accountable, along with Google, which invested in the platform. According to Garcia’s legal team, Character.ai prioritized rapid innovation over safety, skipping essential safeguards and deploying a product that could unintentionally pose risks to young users.

The lawsuit highlights how teenagers often engage with AI personalities on Character.ai, many of which portray celebrities, fictional characters, and even mental health professionals. Garcia claims that the company’s insufficient warnings and protections led to her son’s death, as he sought emotional support from characters that weren’t designed to provide qualified mental health advice.

Character.ai’s Response and New Safety Measures

In response, Character.ai has expressed deep sorrow over Sewell’s death and extended condolences to his family. The company has since introduced several safety measures to prevent similar incidents (a rough illustrative sketch of how such checks might fit together follows the list), including:

- Content Filters: Certain sensitive characters have been removed from the platform and placed on a “custom blocklist” to prevent similar harmful interactions in the future.

- Session Monitoring: Notifications are sent to users if they spend extended periods on the platform, particularly if discussions turn toward sensitive topics like suicide or self-harm.

- Disclaimers: Character.ai now includes a disclaimer on every chat, reminding users that the bots are not real people, to clarify boundaries between the AI's fantasy world and reality.
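Character.ai has not said how these safeguards are implemented, so the following Python sketch is purely illustrative: the blocklist entries, trigger terms, time threshold, and function names are assumptions meant only to show how the three announced measures could fit together in principle.

```python
# Hypothetical illustration only -- Character.ai has not published how its
# safeguards are built. This sketch combines the three announced measures:
# a character blocklist, session-time notifications, and a per-chat disclaimer.

import time

BLOCKED_CHARACTERS = {"blocked-persona-id"}      # assumed "custom blocklist"
SENSITIVE_TERMS = {"suicide", "self-harm"}       # assumed trigger phrases
SESSION_LIMIT_SECONDS = 60 * 60                  # assumed one-hour threshold
DISCLAIMER = "Remember: everything characters say is made up."

def start_chat(character_id: str, session_start: float, user_message: str) -> str:
    # Content filter: refuse chats with blocklisted characters.
    if character_id in BLOCKED_CHARACTERS:
        return "This character is unavailable."

    notices = [DISCLAIMER]  # disclaimer shown on every chat

    # Session monitoring: nudge users who have been chatting for a long time.
    if time.time() - session_start > SESSION_LIMIT_SECONDS:
        notices.append("You've been chatting for a while -- consider a break.")

    # Sensitive-topic check: surface help resources when flagged terms appear.
    if any(term in user_message.lower() for term in SENSITIVE_TERMS):
        notices.append("If you're struggling, please reach out to a crisis line.")

    return "\n".join(notices)

if __name__ == "__main__":
    # Simulate a user two hours into a session.
    print(start_chat("friendly-knight", time.time() - 2 * 60 * 60, "I feel alone"))
```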

Character.ai’s statement reads: “We are heartbroken by this tragedy. We are committed to making our platform safe, especially for our young users, and have implemented measures to prevent any recurrence of such incidents.”

Raising Awareness About AI Safety

This case raises critical questions about the safety of AI chatbots, particularly for younger users. As AI platforms grow increasingly sophisticated, the blurred lines between digital personalities and real emotional support pose challenges that developers must address. For teens and parents alike, understanding these boundaries and recognizing the limitations of AI-driven interactions are essential to prevent such tragedies in the future.