1. Risk to data privacy and security
AI tools like ChatGPT have been trained on extensive amounts of data. To keep learning, these tools need more and more data to train on, some of which can be derived from the inputs of their users.
This means professionals who use AI tools risk exposing sensitive company data, private information, regulated content or material covered by a non-disclosure agreement to whoever runs these AI programs.
It’s one of the reasons Apple, for example, has restricted its employees from using ChatGPT and other AI tools such as GitHub Copilot, according to an internal document cited by the Wall Street Journal. ChatGPT was created by Microsoft-backed OpenAI, and Microsoft owns GitHub, maker of the AI-based coding assistant GitHub Copilot.
2. Potential for bias in AI decision making
It’s possible for human biases to be baked into AI models, especially if the underlying training data is biased. These tools were created by humans, after all.
One example of this came to light in 2018, when Reuters revealed that Amazon had scrapped a secret AI recruiting tool found to be heavily biased in favor of hiring men over women for tech jobs. The company’s AI model had been trained to vet applicants based on patterns in resumes submitted to the company over a decade-long period. Most of those resumes came from men, a reflection of the male-dominated tech industry, which led Amazon’s system to teach itself that male candidates were preferable.
There have been numerous instances of human bias surfacing in AI systems. With that in mind, professionals should be cautious about using AI platforms that are not transparent about where their training data was sourced from or how their models are structured.
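The dynamic behind the Amazon case is easy to reproduce in a toy setting. The sketch below is a hypothetical illustration (not Amazon’s actual system) using scikit-learn: a classifier is trained on synthetic hiring data in which past decisions were biased, and even though gender is removed from the inputs, the model absorbs the bias through a correlated proxy feature.

```python
# Toy illustration of bias leaking into a model through skewed training data.
# All data here is synthetic; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)      # 0 = male, 1 = female (synthetic)
experience = rng.normal(5, 2, n)    # years of experience
# A proxy feature that correlates with gender but not with ability,
# e.g. a gendered keyword appearing on a resume.
proxy = (gender == 1) & (rng.random(n) < 0.7)

# Historical labels encode biased past decisions: equally qualified
# women were hired far less often than men.
qualified = experience > 5
hired = qualified & np.where(gender == 1,
                             rng.random(n) < 0.4,
                             rng.random(n) < 0.9)

# The model never sees gender directly, only experience and the proxy.
X = np.column_stack([experience, proxy.astype(float)])
clf = LogisticRegression().fit(X, hired)

print("proxy-feature coefficient:", clf.coef_[0][1])
```

Running the sketch prints a clearly negative weight on the proxy feature: the model has learned to penalize candidates who resemble the group that was historically passed over, even though no one ever told it to. The bias lives in the data, not in any explicit rule, which is why transparency about training data matters.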
3. The hallucination problem
Large language models, the breakthrough technology powering AI platforms like ChatGPT, are designed to predict the next word or phrase given a series of words in context. These systems, however, are geared toward mimicking natural language rather than guaranteeing accuracy, which at times results in questionable content. Just ask our two lawyer friends at the beginning of this article.
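To make that concrete, here is a minimal sketch of the next-word loop, assuming the open-source Hugging Face transformers library and the small public GPT-2 checkpoint (the models behind ChatGPT are vastly larger, but the core mechanism is the same):

```python
# Minimal sketch of next-token prediction: score every possible next
# token, pick one, append it, repeat. No caching or sampling tricks;
# this is deliberately the simplest (greedy) version of the loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The court ruled that the defendant"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits      # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()      # greedily take the highest-scoring one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Note that nothing in this loop checks whether the continuation is true. The model simply emits whichever tokens score highest, which is why fluent but fabricated output is a natural failure mode rather than a rare glitch.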
This issue is so common that AI researchers have even coined a term for it: AI hallucinations. The problem has also created legal trouble for OpenAI. The firm was hit with a defamation lawsuit from a Georgia radio host who says the ChatGPT platform falsely claimed he embezzled money.