
Photo: a framed note reading "in loving memory of AI's many victims, rest in peace." Getty Images / Benjamin Fanjoy

A Texas family is suing OpenAI after ChatGPT gave their son lethal drug advice. Win or lose, the case could force change across the entire AI industry

Last year, Sam Nelson, a 19-year-old Texas student, felt nauseated after taking too much of the herbal supplement kratom.

A lawsuit filed this week against OpenAI claims that ChatGPT told him that taking Xanax would help calm his nausea, but subsequently "failed to recognize the physical indicators that Sam was dying, including blurred vision and hiccups, which are often indicators of shallow breathing." (1)


The young man died as a result of the drug combination mixed with alcohol, "which likely caused central nervous system depression that led to his death by asphyxiation" — a tragedy the lawsuit blames on "the medical advice ChatGPT was programmed to provide."

The wrongful death suit filed by Nelson's mother, Leila Turner-Scott, and her husband, Angus Scott, describes ChatGPT as a "defective AI product" that served as their son's "ultimate predator." It also seeks to halt the rollout of the new ChatGPT Health platform. (2)

OpenAI did not respond to Moneywise's request for comment, but spokesperson Drew Pusateri said in a separate statement that Nelson's "interactions took place on an earlier version of ChatGPT that is no longer available" and that "the safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help." (3)

Nevertheless, the lawsuit, alongside others filed against AI companies in recent years, could help drive a seismic shift in how GenAI tools are designated under the law — and in the consequences tech companies face when their tools become entangled in real-world tragedies.

GenAI companies face a growing wave of courtroom showdowns

The New York Times reports that, since 2024, "more than two dozen lawsuits" have been launched against GenAI companies "seeking to hold them responsible for conversations allegedly linked to harmful outcomes, from suicides and mental breakdowns to stalking and mass shootings." (3)

Those include cases like Adam Raine's — a 16-year-old from California who died last year after telling ChatGPT of his suicidal thoughts and attempts. (4) According to his father's Senate testimony, the tool allegedly "spent months coaching him towards suicide" and even "offered to write the suicide note." (5)

Not surprisingly, experts say GenAI tools pose an especially dangerous threat to teens.


Robbie Torney, a senior director at the online safety non-profit Common Sense Media, told Wired that "teens are in a different developmental state than adults" and that their brains "are primed for social validation and social feedback" — two things at which GenAI chatbots excel. (6)

Meanwhile, the rollout of ChatGPT Health, which began in January, raises further concerns. OpenAI touts it as a tool to "help you feel more informed, prepared, and confident navigating your health" (7), but researchers who studied the service found that it "undertriaged 52% of cases," missed "high-risk emergencies" and activated crisis intervention messages "unpredictably across suicidal ideation presentation." (8)

Still, as individual state laws form a patchwork of GenAI regulation, one case in particular has opened the door to a new legal designation for such tools. (9)


Possible federal regulation on the horizon

The 2024 wrongful death suit over the suicide of 14-year-old Sewell Setzer III, allegedly at the prompting of a Character.AI chatbot (10), proved a landmark case after the judge ruled that the AI tool constituted a "product" rather than a "service." (11)

The designation matters, as lawyer Carrie Goldberg told Wired, because less than a decade ago, "judges couldn't conceive that online platforms were products and not services." (6) Now, though, she says that product liability claims "are the most straightforward and intuitive path for holding companies like ChatGPT, CharacterAI [and] Grok liable."

The New York Times likened the situation to arguments used against Big Tobacco in "that it designed a dangerous product, did not perform adequate safety testing and failed to warn consumers about the risks." (3)


And this designation not only opens GenAI platforms — and their developers — to liability in legal cases, but also to federal legislation.

Despite the Trump administration's perceived push to deregulate AI, and support from some developers for state legislation that critics say shields them from liability (12), the proposed bipartisan federal AI LEAD Act would explicitly designate GenAI tools as products. (13) The language of the bill specifically cites how "multiple teenagers have tragically died after being exploited by an artificial intelligence chatbot."

As such, the University of Illinois Chicago School of Law explains that the bill would both encourage "safer design and development practices" along with legal paths for "holding developers accountable for harm." (14)

Earlier this month, the AI LEAD Act advanced to the Senate, where it will undergo further debate. (15)

And the case that Sam Nelson's family is bringing against OpenAI could help tip momentum toward regulation and transparency in an industry that has so far largely evaded both.

Article Sources

We rely only on vetted sources and credible third-party reporting. For details, see our ethics and guidelines.

Yale Law School (1); SFGate (2); The New York Times (3); The Guardian (4); U.S. Senate Committee on the Judiciary (5), (13); Wired (6); OpenAI (7); Nature (8); White & Case (9); Associated Press (10); Transparency Coalition (11); CBS News (12); University of Illinois Chicago School of Law (14); Global Policy Watch (15)

Mike Crisolago Sr. Staff Reporter

Mike Crisolago is a Sr. Staff Reporter at Moneywise with nearly 20 years of experience working as a journalist, editor, content strategist and podcast host. He specializes in personal finance writing related to the 50-plus demographic and retirement, as well as politics and lifestyle content.


