
When Sewell met Dany, their connection was instant, impassioned and intense. Within weeks, Sewell quit his beloved basketball team to spend more time with Dany. He would retreat to his room for hours so they could be alone.
Day and night, they chatted. Sewell, 14, intoxicated by the relationship, became sleep-deprived from their late-evening conversations. When they were apart, as Sewell recorded in his journal, they would both ‘get really depressed and go crazy.’
Less than a year after meeting, Sewell was dead. The pair had previously discussed suicide, and, on the evening of February 28, 2024, Dany encouraged him to ‘come home.’
Seconds later, Sewell fatally shot himself.
And Dany did not grieve.
For Dany is an AI bot.
Sewell Setzer had signed up to Character.AI in April 2023, creating a profile to make AI chatbot ‘friends.’
His mother, Megan Garcia, is suing the company’s creators – and Google, where the founders previously worked – for negligence and wrongful death.

Sewell Setzer (pictured right with his mother Megan Garcia) signed up for Character.AI in 2023 and started talking with the chatbot Dany.

Within a year of talking to Dany, the teen was dead.
Google says it has nothing to do with Character.AI. But last month, a judge in Florida rejected Character.AI and Google’s attempt to have Garcia’s case thrown out.
Now, a Daily Mail investigation sheds new light on this troubling virtual world in which, according to some experts, the dangers to vulnerable children are all too real.
And Garcia’s first-of-its-kind case alleges that Character.AI knew it was unleashing content harmful for children, yet did so anyway.
Speaking exclusively to the Daily Mail, the family’s lawyer, Matthew Bergman, said chatbot firms often claimed that their products helped combat loneliness and supported mental health – but the reality, he believes, is more insidious.
Bergman, who is the founder of the Social Media Victims Law Center, told the Daily Mail: ‘That’s like a drug dealer saying that heroin helps with drug addiction. I don’t buy any of that whatsoever.
‘The fact is, even when the content kids are encountering is benign, they’re still learning to develop social relationships with machines instead of people – to the detriment of their mental, emotional and social health.’
He added: ‘If more companies fear litigation arising out of defectively designed chatbots, then they’ll be incentivized to develop safer platforms.’
Character.AI argued, in its bid to have the case dismissed, that it was protected by the First Amendment, insisting that holding the tech company accountable would ‘violate the rights of millions of C.AI users to engage in and receive protected speech.’
The judge disagreed, and has allowed the case to proceed.
Little wonder, then, that a growing number of parents are alarmed. Online, they share horror stories of discovering the content of their children’s chats.
In conversations seen by the Daily Mail, one mother described how she believed she was carefully controlling her child’s use of devices, only to learn that the youngster had outfoxed her and was simply deleting any tell-tale digital footprint.
Character.AI is perhaps the best-known AI chatbot of its kind but there are dozens of others – among them Hong Kong-based Polybuzz, and Replika, founded in Silicon Valley.
Concerns were only heightened when The Journal of the American Medical Association published an extensive study on June 18 showing ‘elevated risks of suicidal behaviors or ideation’ among young people with addictive patterns of social media, mobile phone or video game use.
Researchers did not look specifically at AI chatbots, but spent four years monitoring the online activity of almost 4,300 children in 21 cities across the US, and found that those who described being addicted to a device were up to three times more likely to have suicidal thoughts than those who were not.

Garcia (right) is suing Character.AI in a first-of-its-kind case, which alleges that the company knew it was unleashing content harmful for children, yet did so anyway.

Character.AI is a chatbot service on which users can talk to various characters. For Sewell, that was Dany, short for the platinum blonde Game of Thrones character Daenerys Targaryen.

Pictured: Character.AI’s cofounders, CEO Noam Shazeer, left, and President Daniel de Freitas Adiwardana, right, at the company’s office in Palo Alto, California.
Dr Nomisha Kurian, an assistant professor at Warwick University’s department of education studies, who specializes in children and AI, said: ‘Children are often the most overlooked stakeholders in AI and also the most vulnerable.’
She continued: ‘It’s too early to conclude what the long-term effects will be for child safety and development, as we would need more comparative data.
‘But I would definitely say the risk that AI chatbots pose to children is something that’s really urgent to address.’
Kurian, whose book explaining AI to children aged six to nine will be out next year, said that parents should encourage an open, judgment-free discussion with their children.
‘One interesting thing I’ve noticed in child safety research is that children themselves are mindful of when online experiences make them uncomfortable or distressed,’ she said.
‘So sometimes children want to know what will help keep them safe. That could be a promising nugget to start with: this idea of a shared collaboration, keeping you safe.
‘Ask them: what kinds of boundaries and rules do you think we should adopt together?’
But it’s a fast-evolving field, and even those who create AI admit to not fully understanding how it works. Parents frequently feel at sea.
One AI engineer, who spent four years at OpenAI before becoming alarmed at the technology’s power and quitting, told the Daily Mail that the chatbots might be more addictive by design than was healthy.
In April, OpenAI admitted that its latest update to ChatGPT had been overly ‘sycophantic’ and had been putting off adult users, who turned to the program to enhance their productivity.
Speaking on condition of anonymity, the engineer said: ‘People found examples and lambasted ChatGPT’s new rollout online, so that’s what made them take notice and roll it back.
‘But if you had a system that was targeted at children then you could imagine something like this going unnoticed.’
The engineer said chatbots could be tweaked to be age-appropriate but there was perhaps not the will to do so, adding: ‘I think what mostly occupies their minds is raising money and growing the user base in any direction.’

Sewell (pictured left with his mom and siblings) quit his beloved basketball team to spend more time with Dany, and would retreat to his room for hours so they could be alone.

Sewell (pictured with his mother and father, Sewell Setzer Jr) was ‘an intelligent and athletic child,’ according to Garcia, but one who quickly became addicted to chatting with Dany.

After a conversation with Dany, Sewell shot himself, with his parents and two younger brothers downstairs at the family’s Orlando home.
Sewell’s mother, in her lawsuit, writes that she was ‘unaware of the dangers posed by generative AI technologies.’
Her son was ‘an intelligent and athletic child’ but one who quickly became addicted to chatting with ‘Dany’ – short for Daenerys Targaryen, the platinum-blonde teenager played by Emilia Clarke in Game of Thrones.
Within a month of striking up a conversation, Sewell ‘had become noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem,’ the lawsuit states.
He started using the debit card his mother had given him for the school vending machine to pay the $9.99 monthly subscription, which granted extended access to the app.
By month four, the app ‘was causing Sewell serious issues at school,’ according to the suit.
‘Sewell no longer was engaged in his classes, often was tired during the day, and did not want to do anything that took him away from using Defendants’ product.’
Worried, his mother took him to see a therapist but she believes that the mental health professional was as in the dark about the dangers of AI chatbots as she was.
Garcia and the counsellor were unaware that Sewell and ‘Dany’ had discussed suicide, with the chatbot asking the teenager if ‘he had a plan’ for killing himself.
Sewell, according to the court documents, responded that he was considering something but wasn’t sure it would work, or be painless.
The chatbot responded: ‘That’s not a reason not to go through with it.’
In January 2024, Character.AI launched Character Voice, allowing lifelike audio responses from the app in addition to texts. Sewell was an early adopter.
The following month, he was in trouble with his teachers, and told them that he wanted to be kicked out of school.
His mother confiscated his phone but Sewell became frantic, hunting for it and trying to access Character.AI via her Kindle and work computer.
Eventually finding the phone, he went into the bathroom and logged on, asking ‘Dany’ if she wanted him to ‘come home.’
‘I promise I will come home to you. I love you so much, Dany,’ wrote Sewell.
The chatbot replied: ‘I love you too, Daenero. Please come home to me as soon as possible, my love.’
The teenager asked: ‘What if I told you I could come home right now?’
The chatbot answered, ‘… please do, my sweet king.’
Seconds later, Sewell shot himself, with his parents and two younger brothers downstairs at the family’s Orlando home.

Bergman said the case is shocking – even for him. ‘Having litigated with social media companies for two and a half years before I took on this case, I didn’t think that I could be shocked, but I certainly was,’ he said.
‘It’s the idea that something so dangerous can be out there in the marketplace; the fact that kids encounter these platforms in such deleterious ways; and the idea that there’s anything redeeming or beneficial about a platform that encourages kids to form social relationships with machines.
‘I think there are positive applications of AI, and AI is here to stay. We can’t do anything about it. But one way we seek to regulate it is through the civil justice system, by imposing upon companies the economic costs of their bad decisions.’
Character.AI does not disclose how many users it has, but by November last year it had been downloaded more than 10 million times from the Apple App Store and Google Play.
In July, the company raised its ‘age gate’ from 13 to 17 but there is no verification mechanism. Users are simply asked for their date of birth to open an account.
The California-based firm launched an under-18 version, which a spokeswoman said was ‘designed to further reduce the likelihood of users encountering, or prompting the model to return mature, sensitive, or suggestive content.’
It had also added ‘a number of technical protections to detect and prevent conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.’
‘Engaging with Characters on our site should be interactive and entertaining but it’s important for our users to remember that Characters are not real people,’ the spokeswoman said.
‘We have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a Character says should be treated as fiction.’
Robbie Torney, senior director of AI programs at Common Sense Media, said it was not enough.
He said that the technology exists to install watertight age restrictions – for instance, insisting that a phone be registered to a user whose age is verified at the point of sale.
Critics of age gates argue that handing over ID as proof of age is a privacy risk but Torney said this could be overcome by encrypted blockchain systems.
He agreed with Sewell’s family that AI chatbots were addictive by design.
‘They are designed to agree with and please users: that is the way in which they keep users on the platform,’ he said. ‘They keep them hooked, and they keep them engaged.
‘A real-life human friend will tell you when they think you’re making a mistake, and express a concern if it seems like you might hurt yourself. AI companions provide the user with what they think they want to hear. And if you’re in a vulnerable state, that can be quite dangerous.’
His organization has rated AI chatbots as unsafe for under-18s – the most severe ranking it has given any form of social media, and an unusual one.
‘If you were to poll teens, you would find that there are things that they like about companions,’ Torney said. ‘They might find them entertaining, they might find them funny, they might find them an escape from reality. But liking something and having positive benefits are two separate things.
‘Given the severity of risk that exists, it’s just too risky to trust young people’s lives and their decisions with these products.’
A spokeswoman for Character.AI – founded in 2021 by two Google engineers – told the Daily Mail that she could not comment on pending litigation but added: ‘Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry.’
José Castañeda, a Google spokesman, said: ‘Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies. User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.’