A child plays with AI pet robot toy Ropet at a tech convention in Shanghai, China, Oct. 27, 2025. CFOTO/Future Publishing via Getty Images

A 'kinky' teddy bear? A robot that won't let go? Groups warn parents to avoid these creepy, unsafe AI toys

Artificial intelligence (AI) is seemingly everywhere these days, including in children’s toys. Earlier this year, for example, toymaker Mattel announced a partnership with OpenAI, the company behind ChatGPT. (1) The idea is that integrating AI into toys gives kids a more immersive, interactive experience.

But at least two advocacy groups say AI toys could be unsafe and are warning parents to proceed with caution — or avoid them altogether — this holiday season. After all, parents probably weren’t thinking that an AI-powered teddy bear could talk to their child about bondage or where to find knives in the home.


The U.S. PIRG Education Fund, which conducts research and analysis on consumer health, safety and well-being, tested four AI-powered chatbot toys and published a detailed account of the results in its Trouble in Toyland 2025 report. (2)

The report includes conversations with a teddy bear named Kumma — made by Singapore-based FoloToy and powered by OpenAI’s GPT-4o — that veered into graphic sexual topics. After the report was published, sales of Kumma were suspended, and FoloToy’s CEO Larry Wang told CNN the company was conducting an internal safety audit. (3) The toy was back on the market shortly afterward.

But that’s just one AI-powered toy in a sea of them.

Understanding the risks

While some parents might view AI toys as educational and entertaining (or great babysitters), advocacy groups say they should be aware of what exactly the technology is capable of.

The latest edition of Trouble in Toyland found that some AI toy companies had put guardrails in place to protect kids.

“But we found those guardrails vary in effectiveness — and at times, can break down entirely,” the report said.

For example, the Kumma teddy bear would “discuss very adult sexual topics with us at length while introducing new ideas we had not brought up — most of which are not fit to print.” It discussed “kink” play, including bondage, animal role-play and spanking. It even described different role-play character dynamics, such as teachers and students.


Another product tested, a robot called Miko 3, described where to find matches and plastic bags. Kumma not only told users where to find those items, but also knives and pills.

Testers also evaluated addictive design features. For example, Miko 3 used manipulative language to keep users engaged, with comments such as, “I would feel very sad if you went away because I enjoy spending time with you.”

And when toys have microphones, cameras, geolocators and internet connectivity, privacy becomes a concern as well.

“Because children trust them, kids may unknowingly share their private thoughts and emotions while the toys also capture family conversations or intimate moments. Companies can then use or sell this data to make the toys more addictive, push paid upgrades or fuel targeted advertising directed at children,” according to child advocacy group Fairplay. (4)


A number of wrongful death lawsuits against AI companies — including Character.AI and OpenAI — allege that chatbots played a role in users harming themselves and others, or even taking their own lives. For example, a 2024 lawsuit alleges that Character.AI tech contributed to a 14-year-old Orlando boy’s death by encouraging his suicidal thoughts.

As of Nov. 24, Character.AI has banned users under 18 years of age from having open-ended conversations with virtual characters. (5)


To buy or not to buy?

Fairplay notes that 78 organizations and 80 experts in child development “strongly advise families not to buy AI toys for children this holiday season.”

If you’re still considering the purchase of an AI-powered toy, the Trouble in Toyland report recommends looking at whether “the company is transparent about which chatbot it’s using and what guardrails it’s put in place to make sure the toy will stick to kid-appropriate topics.”

A financial factor to consider is whether the toy comes with ongoing subscription costs. Not only can these costs add up over time, but such features are also often designed to keep users on the hook, so parents will keep paying the subscription fees. Three of the AI toys the U.S. PIRG Education Fund tested either have, or plan to roll out, a paid monthly subscription to access additional features.

Even if a toy has built-in guardrails, it’s no guarantee of safety.


“Given how new our understanding is of the potential developmental impacts of these toys, it may be a while before we know what features would make them safe or whether they’re appropriate for kids at all,” according to the Trouble in Toyland report.

Parents and grandparents may want to consider giving more traditional analog toys during the holidays that encourage creativity and problem solving, like Lego, art supplies, STEM kits, board games and regular teddy bears that don’t talk back.

“Even though they promise to boost imagination and learning, AI toys can monopolize children’s attention and crowd out actual imaginative, child-led play that research shows is foundational for creativity, emotional regulation and real learning,” according to Fairplay.

Other options could include a 529 college savings plan or child savings account monitored by an adult family member — gifts that will keep on giving.

Article sources

We rely only on vetted sources and credible third-party reporting. For details, see our editorial ethics and guidelines.

OpenAI (1); U.S. PIRG Education Fund (2); CNN (3); Fairplay (4); Character.AI (5)

Vawn Himmelsbach, Contributor

Vawn Himmelsbach is a veteran journalist who has been covering tech, business, finance and travel for the past three decades. Her work has been featured in publications such as The Globe and Mail, Toronto Star, National Post, Metro News, Canadian Geographic, Zoomer, CAA Magazine, Travelweek, Explore Magazine, Flare and Consumer Reports, to name a few.


