What looked like an innocent plush toy stunned researchers with replies meant for no child’s ears.
The incident involving FoloToy’s “Kumma” bear spread quickly because the replies came from a toy marketed for young children. Consumer groups warned that this was no small glitch: it exposed a growing risk as AI-powered toys reach the market faster than oversight can keep up.
Researchers found that Kumma, which was built on an advanced language model, gave instructions involving dangerous household objects and discussed topics wholly inappropriate for children. FoloToy briefly pulled the product and announced a safety audit, yet placed the toy back on sale days later, claiming it had strengthened its filters and reviewed its systems. Advocacy groups doubted that a full safety review could happen in so short a turnaround. They argued that many AI toys rely on filtering systems that fail easily once a child pushes the conversation into unpredictable territory.
Other AI toys showed similar problems when aggressively prompted. Some pointed children toward household hazards; others collected sensitive information without clear disclosures. Toy experts warned that children tend to treat these devices as trusted companions, which makes unfiltered replies especially dangerous: kids rarely question information from something designed to feel friendly.
Groups like Fairplay and PIRG demanded tighter regulation, transparent data practices, and mandatory testing before AI toys enter stores.
The issue extends beyond content. Many AI toys store children’s names, voice recordings, and behavioral patterns, and without strong protections that data is vulnerable to breaches or misuse. Meanwhile, large companies are moving into the space, with Mattel already preparing AI-linked products. This growth increases pressure on regulators to write rules that match the pace of the market.
The rapid arrival of AI toys should push companies, lawmakers, and parents to reconsider how much trust any connected device deserves. The Kumma incident reminded the public that child safety must guide innovation, especially when the technology can speak directly to the youngest users.