AI Is Moving Into Physical Products, And Out of Regulatory Reach
Matt Steinberg / Sep 22, 2025
Photo by Huy Hung Trinh on Unsplash.
In late July, Amazon finalized its acquisition of Bee, a startup that makes an AI-powered bracelet designed to listen to everything you say. Around the same time, Meta rolled out new features for its Ray-Ban smart glasses, which include a built-in AI assistant, and OpenAI announced a partnership with Mattel to develop interactive toys. AI is no longer something we just type to or talk to. It’s becoming something we wear, carry in our pockets, and hand to our children—moving rapidly from screens into the physical world.
While some new AI-embedded products may bring real value to people’s lives, most don’t fit neatly into any existing regulatory category—and their risks remain largely unexamined. These new products are part software, part hardware, and part something entirely new: emotionally responsive, behavior-shaping, memory-retaining hybrids. The more physical AI becomes, the more it slips through the frameworks we’ve tried to build. If policy is going to keep up, we need to expand our conception not just of how AI works, but of what it is doing.
This isn’t the first time public policy has failed to keep pace with the digital becoming physical. In the early 2010s, the Internet of Things (IoT) promised a revolution in convenience, but the excitement quickly faded as the implications became clearer. Smart TVs and home assistants turned passive appliances into always-on recorders. Security cameras like Ring normalized corporate-enabled neighborhood surveillance. Toys like Hello Barbie recorded children's voices and transmitted them to third parties for targeted marketing and product development. By the end of the decade, regulators finally tried to catch up: the toy My Friend Cayla was banned in Germany as an “illegal espionage apparatus,” and months later, the FTC updated COPPA to cover connected toys and voice assistants. But by then, the IoT had already embedded a new layer of data infrastructure behind quirky hardware, and the public paid the price.
We’re now in a similar moment with AI hardware—except that the privacy concerns swirling around the IoT now look almost quaint. IoT devices simply collect and transmit data, but today’s AI products go further: they initiate interaction, simulate emotion, and build personalized engagement loops. A 2024 study found that children ages 3 to 6 were more likely to trust a friendly robot than a human. That’s especially powerful for young kids, who naturally project thoughts and feelings onto their toys. And while toys have talked back before, AI lets them do more than recite prewritten lines. They listen, then respond, adapt, and simulate relationships.
The influence of these relationships is amplified by AI’s ability to retain memory across interactions. A device that remembers prior conversations and adapts to a user's habits can create an unprecedented illusion of care and continuity, and that personalization deepens its hold on the user. Yet transparency about what information is stored, how it’s used, and whether it can be erased is rare. As memory becomes a standard feature across devices and services, the boundaries of consent and control in persistent, adaptive systems become harder to define.
The problem is that our rules assume technology is either a product (regulated for physical safety), a service (regulated for speech and data), or an app (governed through platforms). Embodied AI is all three, but no single regulatory agency or statute can oversee the full picture. The Consumer Product Safety Commission (CPSC) focuses on physical hazards such as sharp edges but has no framework for evaluating psychological risks in AI-powered toys. The FTC can penalize deceptive or unfair practices, but doesn’t set behavioral standards. COPPA protects the data of children under the age of 13 but doesn’t regulate what AI says to kids—or how it makes them feel.
The FTC’s recent settlement with Chinese toymaker Apitor underscores this narrowness: regulators can punish data collection violations but not the deeper psychological and behavioral harms posed by AI-powered products.
Even more concerning, there are no clear federal rules around bystander privacy. If your wearable AI records someone standing next to you, it might violate wiretap laws in some states but not others. Roughly a dozen states require two-party consent for recordings, while the rest do not, meaning the same AI device could be legal in one state and a privacy violation in another. This fragmentation leaves bystanders with inconsistent protection against the same harm: being recorded without their consent.
And beyond today’s gaps, infrastructure is emerging in ways that could make tomorrow’s risks harder to contain. Protocols like Anthropic’s Model Context Protocol (MCP) now enable chatbots to connect to outside tools, invoke APIs, and carry context across sessions. For now, that means more capable chatbots. But the same approach could extend to hardware like wearables, toys, or household devices. In that scenario, a bracelet wouldn’t just run a narrow AI feature; it could act as a full agent with memory and tool use, turning everyday objects into companions.
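To make the mechanics concrete, here is a minimal sketch of how a device capability might be exposed as a tool over MCP, using the FastMCP helper from the open-source MCP Python SDK. The server name, the read_heart_rate function, and the hard-coded reading are illustrative assumptions rather than any vendor’s actual product code; the point is only that, once a capability is published this way, any MCP-compatible assistant can discover and invoke it.

```python
# Illustrative only: a hypothetical wearable exposing one capability as an MCP tool.
# Uses the FastMCP helper from the open-source MCP Python SDK ("pip install mcp").
from mcp.server.fastmcp import FastMCP

# Name the server after the (hypothetical) device.
mcp = FastMCP("wearable-bracelet")

@mcp.tool()
def read_heart_rate() -> int:
    """Return the wearer's current heart rate in beats per minute."""
    # A real device would query its sensor; this placeholder returns a fixed value.
    return 72

if __name__ == "__main__":
    # Serve the tool so an MCP-compatible assistant can discover and call it.
    mcp.run()
```

Nothing in that pattern is specific to chat software, which is why protocol-level plumbing of this kind is a policy question as much as a technical one.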
Several policy solutions are already on the table. Congress is considering COPPA 2.0, which would extend protections to include teens up to 16 years old, ban targeted ads, and require data deletion tools. But privacy reform alone isn’t enough. We need to consider broader solutions such as opt-outs for data collection, usage transparency for adaptive algorithms, or disclosure labels for AI systems embedded in children’s products. While not perfect, the EU’s AI Act offers a model for risk-based regulation that addresses AI’s unique psychological impacts—something the US approach currently lacks.
Given the immediate risks and regulatory complexity, the FTC could publicly signal that AI in children’s products requires urgent guidance and greater coordination across agencies. This would help surface gaps where fragmented jurisdictions allow critical risks to slip through. For example, the agency could collaborate with NIST to develop behavioral safety standards or work with states to create consistent bystander privacy protections.
To regulate AI effectively, we have to update our mental model of what AI is. It’s no longer just a chatbot or a website algorithm. It’s a product, a presence, and increasingly, a companion made possible by a quiet transformation in infrastructure. AI is now able to move fluidly among tools, services, and devices, transforming from a static program into something modular, embedded, and always on. If we keep treating AI as static software with guardrails rather than as a system that occupies physical, emotional, and social space, regulators will continue to fall behind.
The question isn’t whether AI will inhabit our physical world—it already has. The question is whether rules will catch up before it becomes so embedded that governing it is no longer possible.