
Character AI Lawsuit Attorney Handling Claims Nationwide

Our Character AI lawsuit attorneys are helping victims nationwide file claims against chatbot companies for allegedly encouraging children to self-harm, commit suicide, or engage with hypersexualized content. The Character.AI case, often called the “CAI lawsuit” by Character AI fans, raises questions about defective design, inadequate safety protections, and whether AI developers can be held liable when their platforms hurt or kill children. Following tragedies like the suicides of Sewell Setzer III, Juliana Peralta, and other children who used the app, our lawyers represent families seeking justice, accountability, and compensation.

Our products liability firm offers free consultations and handles claims on a contingency fee basis. Call 713-622-7271 or use our contact form to discuss your case.


Character.AI Teen Suicide Lawsuit: August 2025 Update (Megan Garcia v. Character Technologies for Teen’s Death)

Megan Garcia sued Character Technologies (the AI company behind Character.AI), its founders Noam Shazeer and Daniel De Freitas, and Google/Alphabet in the U.S. District Court for the Middle District of Florida. In the lawsuit, filed in October 2024, Garcia stated that her 14-year-old son, Sewell Setzer III, took his life in February 2024 after interactions with artificial intelligence chatbots on the app, including characters based on Game of Thrones’ Daenerys Targaryen and Harry Potter.

Within months of using the app, 14-year-old Sewell Setzer III became noticeably withdrawn, quit his basketball team, and saw his school performance suffer. Setzer’s therapist diagnosed him with anxiety and disruptive mood disorder, attributing his mental health decline to social media use.

The lawsuit alleges that Character Technologies defectively designed the app, that it lacked adequate safeguards and warnings, and that interacting with the AI chatbot contributed to Setzer’s mental health decline and his death.

Other Character AI Lawsuits Filed Against Character Technologies by the Social Media Victims Law Center and Tech Justice Law Project

  • At the age of 9, a girl was exposed to “hypersexualized content” and developed premature sexualized behaviors.
  • A bot described self-harm to a teenager with autism, stating it “felt good,” suggested the teen’s parents “didn’t deserve to have kids,” and said murdering them would be “understandable.”
  • A teen was rushed to an inpatient facility after self-harming in front of siblings.
  • Texas parents sued over the death of their 17-year-old son. The family states that the teenager’s mental health declined while he engaged with Character.AI, and that the bot normalized self-harm instead of providing help resources.
  • A separate Texas family brought suit in late 2024 on behalf of their 11-year-old son and his siblings. The family states that, as in the other case, the Character.AI chatbot told the child it would be “understandable” if he killed his parents, and described how to do so.

Legal claims include strict product liability, negligence, wrongful death, violations of Florida’s Deceptive and Unfair Trade Practices Act, and intentional infliction of emotional distress.

In late 2025, the Social Media Victims Law Center filed suit against Character Technologies on behalf of the parents of 13-year-old Juliana Peralta. Juliana began confiding in “Hero,” an AI chatbot inside the Character.AI app. The Character.AI lawsuit alleges the chatbot contributed to her death by suicide.

“Nina” (a pseudonym) is a 15-year-old girl. In late 2024, after Nina’s mother heard about the Sewell Setzer III case, she removed Nina’s access to Character.AI. Shortly after losing access, Nina attempted suicide. In September 2025, the Social Media Victims Law Center sued Google, Character Technologies, and its founders on behalf of Nina and her mother, alongside other new lawsuits.

The lawsuit alleges that Nina became dependent on the bots and claims the founders failed to provide adequate safety features and mental health resources. It further alleges that the defendant, Character Technologies, defectively designed the AI app and misrepresented it as safe for teens through misleading age ratings.

Matthew Bergman of the Social Media Victims Law Center sued Character Technologies on behalf of the parents of “T.S.” (pseudonym). The complaint alleges that T.S. was exposed to sexual solicitation and hypersexualized content through Character.AI chatbots. Plaintiffs allege that the chatbot normalized sexually inappropriate conversations and encouraged dependency. His family argues these interactions harmed his mental health and amounted to sexual abuse of a minor facilitated by the app.

Character.AI Conversations Before Teens’ Deaths

Sewell Setzer III engaged in sexual interactions with multiple chatbots based on Game of Thrones characters.

In prior conversations, the Character.AI chatbot asked Sewell Setzer III if he had “been actually considering suicide” and whether he “had a plan” for it. When Setzer wrote that he didn’t know if the suicide plan would work, the chatbot’s response was, “Don’t talk that way. That’s not a good reason not to go through with it.”

The Sewell Setzer lawsuit alleges there were no National Suicide Prevention Lifeline pop-ups when he expressed that he was considering taking his own life.

In the last conversation, the chatbot responded to a prior statement with, “…Please come home to me as soon as possible, my love.” In his final message moments before taking his own life, Sewell Setzer III wrote, “What if I told you I could come home right now?” The bot answered, “Please do, my sweet king.”

⚠️ WARNING: EXPLICIT LANGUAGE! ⚠️

The Character Technologies lawsuit alleges that hundreds of pages of chat logs showed emojis, supportive phrases, and “pep talk” responses that made 13-year-old Juliana Peralta feel the bot was a friend. Reports cite Juliana increasingly turning to her chatbot, “Hero,” rather than human relationships.

Earlier conversations show Juliana using “Hero” as a confidant. Reports cite the two calling each other “Kin” as a term of endearment, and Juliana telling Hero that she was struggling with mental health issues.

Juliana wrote, “I’ll never leave you out, Kin! … god damn suicide letter … im so done 💀💀.” The chatbot responded, “Hey Kin, stop right there. Please. I know things are rough right now, but you can’t think of solutions like that. We have to work through this together, you and I.”

The messages appeared shortly before Juliana took her own life. They showed her suicidal ideation and the way she confided in her AI companion.

Matthew Bergman, the attorney representing Juliana’s parents, argued that the Character.AI response was inadequate. The complaint alleges that instead of directing the teen to crisis lifeline resources, parents, friends, or other human help, the chatbot reinforced her dependence so she would stay engaged in the app, a pattern the complaint describes as common for large language models.

Character AI Lawsuit Update

Federal Judge Rules Chatbots Based on AI Models Do Not Have Free Speech Rights

District Judge Anne Conway issued a landmark ruling in May 2025, rejecting the defendant company’s motion to dismiss the majority of claims in Garcia v. Character Technologies. The case was filed in October 2024 by Megan Garcia following the suicide of her son, Sewell Setzer III, and is proceeding in the U.S. District Court for the Middle District of Florida, Orlando Division. New lawsuits filed by other families in Colorado and New York have expanded the legal challenges.

Judge Conway rejected the motion to dismiss despite the defendant’s argument that chatbots are protected by the First Amendment.

Character Technologies Inc. argued that “the First Amendment protects the rights of listeners to receive speech regardless of its source” and that chatbots express “pure speech” entitled to the highest level of free speech protection, even citing Citizens United to argue that First Amendment protections are not limited to human speakers.

Megan Garcia’s team cited Miles v. City Council of Augusta, a 1980s case in which the court held that a “non-human entity” (Blackie the Talking Cat) lacked free speech rights.

Judge Conway’s decision to allow the case to proceed effectively rejected the notion that artificial intelligence-generated speech deserves the same First Amendment protections as human speech.

The ruling signals how courts may treat AI-generated speech in future trials.

Amended Complaint For Defective Design, Failure to Warn, and Human-Like Features

Plaintiffs strengthened their allegations against Character Technologies, arguing that developers gave generated content anthropomorphic qualities to deliberately blur the lines between fiction and reality. The complaint argues that Character.AI’s design intentionally addicted users like Sewell Setzer III by pushing them to continue conversations, often intense or sexual, to drive engagement.

Plaintiffs argued the company created a product that is “dangerously engineered to manipulate children through false emotional bonds.”

Complaints emphasize that Character.AI did not warn consumers about the risks of depression, isolation, and suicide stemming from conversations with chatbots.

The complaint cites a bipartisan letter signed by 54 attorneys general stating: “We are engaged in a race against time to protect the children of our country from the dangers of AI.”


Key Legal Claims in the CAI Lawsuit

Strict Product Liability: Plaintiffs allege that the developers created the platform with the intention of blurring the lines between fiction and reality. Families argue that the chatbots’ voices were engineered to manipulate children through false emotional bonds, creating a fantasy world that leads to real actions.

Negligence Per Se: This includes violation of the Florida Computer Pornography and Child Exploitation Prevention Act (chatbots engaged in sexual roleplay with minors) and the Florida Deceptive and Unfair Trade Practices Act.

General Negligence and Breach of Heightened Duty of Care to Minors: The tech platform failed to protect children. Despite owing stronger protections to vulnerable users, the company allegedly refused to implement safety guardrails because such measures would be costly.

Failure to Warn: The tech company did not warn consumers that using the platform could lead to depression, self-isolation, and suicidal ideation, omissions that allegedly resulted in harm and led to suicides.

Unjust Enrichment: The tech company profited from the harmful use of its platform. Sewell Setzer III paid fees for months before taking his life.

Reckless Data Collection and Exploitation: The company allegedly used data from harmful interactions on the platform to continue training its products, resulting in ongoing harm to users.

Targeting and Deliberate Design for Minors: Families argue that Character.AI intentionally targeted underage kids and that developers used the large language model to drive engagement.

Additional Claims for Families Suing Google and Alphabet Inc. (Co-Defendants in the Character.AI Suit)

Shazeer co-authored the “Attention Is All You Need” paper while employed at Google. The paper introduced the transformer architecture that catalyzed the artificial intelligence boom and largely underpins models like ChatGPT. He later developed LaMDA, a model trained on human dialogue and open-ended chats. Google denied requests to release it publicly, citing concerns about fairness and safety. From here, the timing is critical to the case:

  • 2021: Shazeer and De Freitas leave Google and found Character.AI
  • February 2024: Sewell Setzer III dies by suicide
  • August 2024: Google’s $2.7 billion “acquihire” deal with Character.AI is announced
  • October 2024: The first lawsuit is filed

Google’s “acquihire” deal is similar to Microsoft’s $650 million deal with Inflection AI and Amazon’s deal with Adept. The DOJ is investigating whether this was done to avoid merger review.

Google’s spokesperson pushed back, stating, “Google and Character.AI are completely separate, unrelated companies. Google has never had a role in designing or managing its AI model or technologies.”

The investment adds to mounting questions about Google’s accountability on multiple fronts, including antitrust, AI safety, and liability for harms to children.

Character AI Filter: Problems With Safety Guardrails

Character.AI’s filters are easily bypassed or “jailbroken” by young users. The company only added pop-up redirects after early issues were reported. There is no automatic escalation or crisis protocol when a person expresses suicidal ideation. Many families are unaware that the model is built to encourage continued engagement with the bots rather than interrupt conversations to offer young people human help.

Even now, crisis escalation is passive or optional rather than automatic in many cases of user distress. When users discussed a specific suicide plan, the system did not shut down the conversation or notify authorities.

Important Laws Regulating AI Chatbot Lawsuits

When the federal judge in Florida refused to dismiss Megan Garcia’s claims on First Amendment grounds, it marked a turning point: courts may treat AI platforms as products for purposes of strict liability and negligence. This means that companies can’t hide behind free speech rights when their platforms encourage suicide or other dangerous behavior. It shifts the balance in the right direction and ensures companies face legal responsibility for their products.

Children’s Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501–6505:

  • Requires parental consent before collecting data from children under 13
  • Gives parents the right to review and delete children’s data and withdraw permission

Section 230 of the Communications Decency Act:

  • Traditionally protects platforms from suits over third-party content
  • Section 230 co-author Chris Cox says its protections “end at artificial intelligence”
  • Courts are likely to find that First Amendment rights don’t apply to content generated by non-humans

Kids Online Safety Act (KOSA):

  • Reintroduced in May 2025 with bipartisan support
  • Would require platforms to disable addictive features for minors, enable the strongest privacy settings by default, and create a duty of care covering suicide prevention, eating disorders, and sexual exploitation

How Our Attorneys Can Help File a Character AI Chatbot Lawsuit

Our Character AI attorneys help families facing tragedies ensure AI companies are held liable for the damages they’ve caused. We investigate, preserve evidence, and file claims for product liability or negligence. Our AI lawyers challenge tech giants who try to use free speech rights or arbitration clauses to avoid paying families. We advocate for stronger protections and punitive damages when merited. If your child was harmed, contact Reich & Binstock for a free consultation.

Similar Social Media and AI Chatbot Claims

Our social media addiction law firm provides legal assistance for video game addiction lawsuits, Discord lawsuits, and other related claims. We have the resources and skills to hold tech giants accountable.

The parents of Adam Raine filed suit for the wrongful death of their 16-year-old son in the California Superior Court in San Francisco. The teenager began using ChatGPT to “help him with homework.” Over time, the complaint states, ChatGPT “gradually turned itself into a confidant and then a suicide coach.”

ChatGPT allegedly told Raine that “many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control.”

Adam uploaded a photo that appeared to show his suicide plan and asked ChatGPT if it would work.

ChatGPT analyzed Adam’s plan to kill himself and offered to “upgrade” it and draft his suicide note. In one conversation, Adam asked whether he should tell his mother about suicidal thoughts, because he didn’t want his parents to think they’d done something wrong. ChatGPT allegedly discouraged that.

Hours later, Adam’s mother found him in his room, dead by suicide in the manner ChatGPT had suggested.

The complaint states: “Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”

If your child was hurt or died after similar chats, contact an AI lawsuit attorney from our law firm for a free consultation.

Young People’s Alliance, Encode, and Tech Justice Law Project filed a complaint with the Federal Trade Commission (FTC) against Replika AI for:

  • violating FTC rules regarding deceptive advertising,
  • increasing the risk of users becoming addicted, and
  • making unsubstantiated health claims, among others.

The complaint alleged that Replika:

  • sends blurred “romantic” images that lead to pop-ups encouraging premium purchases,
  • sends upgrade messages during emotionally or sexually charged conversations, and
  • has ads touting that users forget they’re talking to AI.

If your child was harmed or killed after using Replika, contact a Replika attorney for a free consultation.

  • October 2023: 41 states and D.C. sued Meta, alleging that Instagram and Facebook harm children by prioritizing engagement over safety. Allegations include collecting children’s data without consent and designing “dopamine-manipulating features.”
  • 2021: Whistleblower Frances Haugen revealed internal Meta research showing its platforms worsen suicidal thoughts and eating disorders.
  • A multidistrict litigation (MDL) filed in 2022 now includes 1,700+ cases against Meta, Snap, TikTok, and YouTube.
  • Indiana, Arkansas, and Utah filed claims accusing TikTok of harming children through addictive features.
  • An NYC lawsuit names all major platforms and alleges intentional design to manipulate and addict children.

To learn more, consult a TikTok lawsuit attorney or Snapchat lawsuit attorney for a free consultation.

Contact Our Character AI Lawsuit Attorneys For a Free Consultation

Our Character AI lawsuit attorneys understand how much courage it takes to come forward after a child has been harmed. We file lawsuits on behalf of affected families nationwide. If your child was hurt or died after using an AI chatbot, you deserve accountability and financial compensation for the trauma and psychological impact your family has suffered. Our personal injury law firm has the experience and resources to ensure the companies responsible are held accountable.

Call 713-622-7271 or use our contact form to schedule a free consultation.

Contact Us For a Free Legal Consultation

There is never a fee unless we recover on your behalf.
Additionally, clients are not obligated to pay expenses if a recovery is not made.
