

‘Failed miserably’: Two lawyers who used ChatGPT in a federal court filing that cited ‘bogus’ cases are now facing serious punishment. Here are some big risks of using AI in the workplace

U.S. lawyers Peter LoDuca and Steven A. Schwartz are facing potentially serious consequences after an artificial intelligence tool they used for legal research produced case citations that turned out to be “bogus.”

A court filing in a lawsuit against Colombian airline Avianca included references to past cases that, as it turns out, didn’t exist. District Judge P. Kevin Castel identified the citations as fabrications and demanded an explanation.


It emerged that Schwartz had used OpenAI’s platform ChatGPT to search for legal precedents to support the case of the pair’s client against the airline.

Schwartz admitted in a court appearance June 8 he had “failed miserably” to do follow-up research to ensure the citations in the court filing were correct.

“I did not comprehend that ChatGPT could fabricate cases,” he said.

LoDuca said he trusted Schwartz and didn’t closely review the filing.

“It never dawned on me that this was a bogus case,” he said.

The incident highlights the risks of AI that may become more apparent as these tools begin seeing further use across workplaces. Here are some of the top AI risks professionals should be aware of.

1. Risk to data privacy and security

AI tools like ChatGPT have been trained on extensive amounts of data. In order to keep learning, these tools need more and more data to train on, some of which can be derived from the inputs of their users.


This means professionals who use AI tools may be at risk of exposing sensitive company data, private information, regulated content or data that could breach a non-disclosure agreement to whoever runs these AI programs.

It’s one of the reasons Apple, for example, has restricted its employees from using ChatGPT and other AI tools such as GitHub Copilot, according to the Wall Street Journal, citing an internal document. ChatGPT was created by Microsoft-backed OpenAI, and Microsoft owns AI-based coding program GitHub Copilot.


2. Potential for bias in AI decision making

It’s possible for human biases to be baked into AI models, especially if the underlying training data is biased. These tools were created by humans, after all.

One example of this came to light in 2018 when Reuters revealed that Amazon had scrapped a secret AI recruiting tool that was found to be biased heavily in favor of hiring men over women to fill tech jobs. The company’s AI model was apparently trained to vet applicants based on patterns in resumes submitted to the company over a decade-long period. Most resumes came from men — a reflection of the male-dominated tech industry — which led to Amazon’s system teaching itself male candidates were preferable.

There have been a number of instances of human bias on display in AI, and with that in mind, professionals should be cautious about using AI platforms that are not transparent about where their training data was sourced or how their models are structured.

3. The hallucination problem

Large language models, the breakthrough technology powering AI platforms like ChatGPT, are designed to predict the next word or phrase from a series of words in context. These systems, however, are geared toward mimicking natural language rather than guaranteeing accuracy, which at times has resulted in questionable content. Just ask our two lawyer friends from the beginning of this article.

This issue is so common that AI researchers have even coined a term for it: AI hallucinations. The problem has also created legal trouble for OpenAI. The firm was hit with a defamation lawsuit from a Georgia radio host who says the ChatGPT platform falsely claimed he embezzled money.


Vishesh Raisinghani Freelance Writer

Vishesh Raisinghani is a financial journalist covering personal finance, investing and the global economy. He's also the founder of Sharpe Ascension Inc., a content marketing agency focused on investment firms. His work has appeared in Moneywise, Yahoo Finance, Motley Fool, Seeking Alpha, Mergers & Acquisitions Magazine and Piggybank.


