Last year, Sam Nelson, a 19-year-old Texas student, felt nauseous after taking too much of the herbal supplement Kratom.
A lawsuit filed this week against OpenAI claims that ChatGPT told him that taking Xanax would help calm his nausea, but subsequently "failed to recognize the physical indicators that Sam was dying, including blurred vision and hiccups, which are often indicators of shallow breathing." (1)
The young man died as a result of the drug combination mixed with alcohol, "which likely caused central nervous system depression that led to his death by asphyxiation" — a tragedy the lawsuit blames on "the medical advice ChatGPT was programmed to provide."
The wrongful death suit filed by Nelson's mother, Leila Turner-Scott, and her husband, Angus Scott, describes ChatGPT as a "defective AI product" that served as their son's "ultimate predator." It also seeks to halt the rollout of the new ChatGPT Health platform. (2)
OpenAI did not respond to Moneywise's request for comment, but spokesperson Drew Pusateri said in a separate statement that Nelson's "interactions took place on an earlier version of ChatGPT that is no longer available" and that "the safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help." (3)
Nevertheless, the lawsuit, paired with others launched against AI companies in recent years, could help lead to a seismic shift in the way GenAI tools are designated under the law — and the consequences tech companies face when their tools end up entangled in real-world tragedies.
GenAI companies face a growing wave of courtroom showdowns
The New York Times reports that, since 2024, "more than two dozen lawsuits" have been launched against GenAI companies "seeking to hold them responsible for conversations allegedly linked to harmful outcomes, from suicides and mental breakdowns to stalking and mass shootings." (3)
Those include cases like Adam Raine's — a 16-year-old from California who died last year after telling ChatGPT of his suicidal thoughts and attempts. (4) According to his father's Senate testimony, the tool allegedly "spent months coaching him towards suicide" and even "offered to write the suicide note." (5)
Not surprisingly, experts say GenAI tools pose an especially dangerous threat to teens.
Robbie Torney, a senior director at the online safety non-profit Common Sense Media, told Wired that "teens are in a different developmental state than adults" and that their brains "are primed for social validation and social feedback" — two things at which GenAI chatbots excel. (6)
Meanwhile, the rollout of ChatGPT Health, which began in January, raises further concerns. OpenAI touts it as a tool to "help you feel more informed, prepared, and confident navigating your health" (7), but researchers who studied the service found that it "undertriaged 52% of cases," missed "high-risk emergencies" and activated crisis intervention messages "unpredictably across suicidal ideation presentation." (8)
Still, as individual state laws attempt a patchwork of GenAI regulation, one case in particular has reshaped how such tools are designated under the law. (9)
Possible federal regulation on the horizon
The 2024 wrongful death suit over the suicide of 14-year-old Sewell Setzer III, allegedly at the prompting of a Character.AI chatbot (10), proved a landmark case after the judge ruled that the AI tool constituted a "product" rather than a "service." (11)
The designation matters, as lawyer Carrie Goldberg told Wired, because less than a decade ago, "judges couldn't conceive that online platforms were products and not services." (6) Now, though, she says that product liability claims "are the most straightforward and intuitive path for holding companies like ChatGPT, CharacterAI [and] Grok liable."
The New York Times likened the situation to arguments used against Big Tobacco in "that it designed a dangerous product, did not perform adequate safety testing and failed to warn consumers about the risks." (3)
And this designation not only opens GenAI platforms — and their developers — to liability in legal cases, but also to federal legislation.
Despite the Trump administration's perceived attempts to deregulate AI and some developers supporting state legislation that critics say shields them from liabilities (12), the proposed federal bipartisan AI LEAD Act would explicitly designate GenAI tools as products. (13) And the language of the bill specifically cites how "multiple teenagers have tragically died after being exploited by an artificial intelligence chatbot."
As such, the University of Illinois Chicago School of Law explains that the bill would both encourage "safer design and development practices" along with legal paths for "holding developers accountable for harm." (14)
Earlier this month, the AI LEAD Act was advanced to the Senate, where it will undergo further debate. (15)
And the case that Sam Nelson's family is bringing against OpenAI could help tip momentum toward greater regulation and transparency in an industry that has so far largely evaded both.
Article Sources
We rely only on vetted sources and credible third-party reporting. For details, see our ethics and guidelines.
Yale Law School (1); SFGate (2); The New York Times (3); The Guardian (4); U.S. Senate Committee on the Judiciary (5), (13); Wired (6); OpenAI (7); Nature (8); White & Case (9); Associated Press (10); Transparency Coalition (11); CBS News (12); University of Illinois Chicago School of Law (14); Global Policy Watch (15)
Mike Crisolago is a Sr. Staff Reporter at Moneywise with nearly 20 years of experience working as a journalist, editor, content strategist and podcast host. He specializes in personal finance writing related to the 50-plus demographic and retirement, as well as politics and lifestyle content.
