With AI Agents, 'Memory' Raises Policy and Privacy Questions
Kevin Frazier, Joshua Joseph / Sep 29, 2025
Whether AI agents are useful hinges on their ability to collect and act on their 'memories' of users. That introduces a number of policy questions, write Kevin Frazier and Joshua Joseph.

Talking to AI 2.0 by Yutong Liu & Kingston School of Art / Better Images of AI / CC by 4.0
Technology companies are stepping forward to persuade us to start relationships with 'AI agents,' giving them our information and trust in exchange for the promise of an AI tool that can reliably take care of everything from monotonous personal tasks to important work projects on our behalf. Unlike the AI chatbots you may use today, AI agents are supposed to act autonomously on your behalf — taking multi-step actions in pursuit of a goal, like buying an airline ticket, with minimal direction. Leading AI labs such as Google, OpenAI, and Anthropic, as well as a range of startups, are working hard to develop such agents. If their product roadmaps are realized, AI agents will become ubiquitous and a core part of daily life.
In this scenario, every prompt you enter, detail you disclose, and task you assign an AI agent will become part of its “memory,” or the "ability of an AI system to retain, recall, and use information from past interactions to improve future responses and interactions." These memories are typically discrete inferences the agent draws about the user, stored as natural language and used by the agent as context for future requests. In theory, agents will use that information to handle personal and professional tasks with ease and little to no oversight.
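For readers who want a more concrete picture of the mechanism, here is a minimal sketch in Python of how such a memory might be represented and carried into the next request: a short natural-language inference, tagged with its source, prepended to the prompt as context. The structure and field names are our own illustrative assumptions, not any lab's actual format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: a "memory" as a short natural-language inference about the
# user, plus where it came from and when it was recorded.
@dataclass
class Memory:
    text: str        # the inference itself, in plain language
    source: str      # where it was drawn from (prompt, email, calendar, ...)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def build_context(memories: list[Memory], user_request: str) -> str:
    """Fold prior inferences into the prompt for a new task."""
    recalled = "\n".join(f"- {m.text} (source: {m.source})" for m in memories)
    return f"Known about this user:\n{recalled}\n\nCurrent request: {user_request}"

if __name__ == "__main__":
    memories = [
        Memory("Prefers aisle seats on flights", source="booking history"),
        Memory("Usually wants coffee before long meeting blocks", source="calendar"),
    ]
    print(build_context(memories, "Book me a flight to Denver next Tuesday."))
```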
A digital executive assistant?
In practice, there’s a set of questions consumers need to ask before agents become ubiquitous and policies and norms set by tech companies become fixed. The best way to explore the issues raised by AI agents retaining memories is to consider the analog equivalent. Here’s our hypothetical:
Say you’ve worked with the same executive assistant for years. They know you prefer the aisle to the window seat on flights. They anticipate when you’re in need of a coffee and arrange it to be delivered. They can recite your Social Security number, last three home addresses, and your mother’s maiden name like it’s their ABCs.
You never explicitly told them to infer your needs. Nor did you direct them to hold on to your sensitive information. But you’ve never had cause for concern — it’s just so convenient. It never crossed your mind that they could retain those details long after your working relationship ends — it just seems unlikely that they’d ever have a reason to divulge them. In short, you’ve formed a dependence on your assistant and have done the math: the many pros of a nearly all-knowing executive assistant exceed the cost of seemingly hypothetical risks.
A chance to switch jobs comes up. Better pay. Fancier title. Shorter commute. It’s a no-brainer. The hang-up? You wouldn’t be able to bring your assistant. At first you think that it’s no big deal. You can just help someone new get up to speed, right? Surely your current assistant could easily and accurately write down everything they know about your quirks and pass that along to the next person?
It’s at that moment your current assistant sends you a text. It’s a reminder to listen to that podcast your boss mentioned a few days ago. It’s something you would have never remembered and a little thing that can make a big difference with your podcast-crazed CEO. You realize that you’d be willing to pay handsomely to have your assistant come with you to the new role.
But exactly how much would you pay? Will the assistant be as helpful in a new company with different rules and norms? Will they be able to adjust your old routines and expectations to a new environment? You’re not exactly sure, but you’d at least like to have a chance to give them a shot.
Turning back to the world of AI agents: if you think an executive assistant can come to understand your ins and outs better than you do, imagine how indispensable an agent could become after spending years, if not decades, reading all of your correspondence, examining every credit card statement, perusing each email sent and received, and so on. Then consider how the questions raised above carry over into the policies and expectations we should have for how and why these agents function.
Which memories should be stored?
First, what information about you should be collected by AI agents? On one end of the spectrum, you could envision an agent retaining all information you submit and that it observes — prompts, documents, data from third-party apps such as your email or Slack messages — as well as the information it can infer from that trove of data. The assumption would be that if you share it, the agent will learn from it, remember it, and use it. This approach would maximize the agent’s ability to handle tasks on your behalf, rendering it even more indispensable.
It would also raise a number of concerns. For example, would the inferences drawn by the agent be accurate and, perhaps more importantly, would they align with your preferences and priorities? An agent might pick up on your habit of visiting the local pub after a day of more than three hours of meetings and proactively reserve a table for you whenever it sees a full day on your calendar. How might this conflict with your goal to cut back on drinking?
On the other end of the spectrum, you could imagine the agent regularly asking whether you would like it to retain certain facts. After each prompt, it might give you an opportunity to specify which information to store, for how long, and for what purposes. In this case, you’d maximize control over your memories while also adding a tremendous amount of friction to your use of the tool. This tedious process could soon drive you to give up on regularly using the AI agent, or it could sap your willingness to guard certain information and leave you defaulting to sharing everything (as many of us do with cookies).
It could also lead to unintended consequences. Imagine asking an AI agent to order lunch for you. Perhaps you directed the agent to forget your peanut allergy a few months ago, or perhaps the agent simply did not surface that information when taking on this task (after all, even an agent can only “know” so much when asked to perform tasks quickly). But, of course, you have since forgotten about your concern with sharing medical information with the agent. Now that spring roll with peanut sauce that sent you to the hospital could have been avoided had you opted for a less hands-on approach to memory retention.
The central tension, then, is between convenience and control. A memory-rich agent may feel indispensable, but only if users can trust that its recollections are both accurate and aligned with their interests. That trust is fragile. Should an agent be free to remember everything unless told otherwise, or should its default posture be selective amnesia? The answer carries real implications not only for safety but also for usability.
Beyond the individual tradeoffs, several broader concerns deserve attention. Who, exactly, can access your agent’s memories — just you, or also the lab that designed it, a future employer who pays for integration, or even third-party developers who build on top of the agent’s platform? Should you be able to designate entire categories of life as “off-limits” for memory, much like drawing a curtain around certain activities? If so, how granular should those categories be — finance, health, intimate relationships, political views? Might a bad actor be able to extract sensitive information about you through a prompt injection attack (see the “Novel capabilities, novel risks” section of the ChatGPT Agent announcement) or even hijack your agent?
There’s also the problem of external contributions. Could a colleague, spouse, or platform directly add entries to your memory profile — “Jonathan prefers short replies to emails” — without your knowledge? Relatedly, could your agent infer things about you from your conversations with other people, inferences that may be inaccurate or that you’d prefer be omitted? If the agent does store those pieces of information, how would you know what’s been logged, and how would you correct or delete it? Reviewing and auditing one’s digital memory could itself become a new form of personal administration, raising questions about transparency and oversight.
In short, deciding what should be remembered is not just a question of personal preference; it’s a question of governance. Without careful design and clear rules, we risk creating agents whose memories become less like a helpful assistant and more like a permanent surveillance file.
Which controls should you have over your memories?
If deciding what an agent remembers is one problem, deciding how much authority you have over those memories is another. Here, three issues stand out: portability, accuracy, and retention. Each raises a different set of policy challenges, all tied to a deeper question: do you own your digital memories, or merely rent them?
Imagine trying to leave your job but discovering your executive assistant is contractually barred from joining you. That’s the risk if memories cannot move with you from one AI platform to another. A simple copy-and-paste transfer would seem like common sense, but companies may resist: they can argue that the insights their systems have drawn from your data are proprietary, or that moving memories introduces security concerns. The reality is that restricting portability creates enormous “switching costs.” If moving to a rival agent means starting from scratch, users will be effectively locked in — a dynamic antitrust lawyers would recognize as a modern twist on classic market power. The fight over portability is therefore not only about convenience, but also about competition.
A second issue is whether you can edit what your agent knows about you. Some changes may feel trivial: swapping your listed hometown, adjusting a phone number, maybe even knocking a few years off your official age. But once agents become conduits to doctors, insurers, banks, or government services, accuracy takes on legal weight. Misstating your weight could affect medical advice; altering your salary history could alter tax filings. At the same time, individuals may have good reasons to curate or reshape their profile — to protect themselves from discrimination, for instance, or to experiment with alternative identities. The challenge for policymakers is to distinguish harmless self-presentation from edits that could undermine trust in entire systems.
A third concern is how long memories should last. Should your agent retain a perfect archive stretching back decades, or operate more like a human assistant whose recall fades with time? Endless retention creates a gold mine for convenience — and a minefield for privacy. The longer memories linger, the more likely they are to be misused, subpoenaed, or breached. A rolling expiration date, by contrast, sacrifices continuity for safety but risks deleting context you may one day need. And even if deletion is promised, can users trust that memories are purged from every backup and training set? Without enforceable obligations, “forgetting” may be more illusion than fact.
Other questions lurk at the edges. Should you be able to name an heir to your memories, passing them along as part of your digital estate, much as Google’s Inactive Account Manager allows? Could memories become a tradable asset, bought and sold like intellectual property, opening a market for personal histories? Or should they be treated more like medical records — protected, inalienable, and tightly controlled?
The answers will shape not just how individuals relate to their agents but also how society understands the boundary between person and machine. Control over memory is, in many ways, control over identity. And unless users are given meaningful rights to carry, correct, and curate their memories, they may find themselves not the masters of their digital lives but the subjects of them.
How to prevent anti-competitive practices in the age of agents
The more indispensable agents become, the more tempting it will be for dominant firms to shape the market in their favor. We’ve seen this movie before: Apple restricting third-party payment systems in its App Store, and social media platforms acquiring or burying nascent rivals. The shift to AI agents is not just a technical leap; it’s an opportunity for the largest players to entrench their dominance. Three questions stand out.
Will agents be bundled with other services? When Microsoft bundled Teams into Office 365, consumers may have been denied meaningful choice in team collaboration platforms. The two-for-the-price-of-one deal may look good on paper to users, yet it may have prevented rivals like Slack from gaining a stronger foothold in the market. Now imagine a world where you cannot use your favorite email client, calendar app, or even smart home devices without also using the company’s AI agent. The appeal of “it just works” integration will be obvious, but so will the dangers. Consumers may find themselves locked into an ecosystem where the agent is not just helpful but mandatory. Should regulators require companies to offer “agent-free” versions of their products, or at least give consumers the option to substitute a rival agent, much as telecom rules required carriers to accept any compatible handset?
Will portability rights be real or illusory? Telecom again offers a lesson. Before number portability was required, switching carriers meant adopting a new phone number. Crazy, we know. This hassle discouraged competition and gave incumbents leverage. AI agents could create similar frictions if companies refuse to let users carry their memories elsewhere. Imagine getting a new GPS device that will import every street you’ve ever driven but won’t carry over your saved routes, favorite locations, or traffic patterns. You’re left with a bare map, stripped of the personalizations that made navigation seamless. Should portability be enforced by law, with clear rules on what counts as a transferable memory — not just raw data, but also patterns and insights? Or will firms hide behind claims of intellectual property to keep your digital self trapped?
Finally, will agents understand memories the same way? Even if portability is guaranteed and memories can technically be read across platforms, meaningful transfer requires more than just data compatibility. A user may export their memories from one agent only to find that a rival interprets them differently or makes the import deliberately cumbersome. One agent's inference that you “prefer emails that get quickly to the point” might become another's "ignores email niceties." The challenge isn't whether the data can be read (it’s just natural language text data) but whether the semantic understanding and contextual relationships transfer intact. Should policymakers require standardized schemas for how memories are structured and labeled across platforms? Or should they focus on preventing artificial friction, like requiring users to manually re-enter each memory or limiting bulk imports?
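To make that mismatch concrete, here is a rough sketch of how two providers might export the "same" memory, and what a portable representation would need to carry so a receiving agent can re-evaluate the inference rather than inherit another system's framing. Every field name below is invented for illustration; no provider's actual export format is implied.

```python
import json

# Hypothetical exports of the "same" memory from two different agents. Both are
# readable as plain text; what differs is how the inference is framed and labeled.
provider_a_export = {
    "memory": "Prefers emails that get quickly to the point",
    "category": "communication_style",
    "confidence": 0.9,
}
provider_b_export = {
    "note": "Ignores email niceties",
    "tag": "etiquette",
}

# One possible portable form: the statement, its provenance, and the exporting
# agent's confidence, giving the importing agent enough context to reassess it.
portable_memory = {
    "statement": "Prefers concise, direct emails",
    "derived_from": ["observed reply patterns in work email"],
    "exporting_agent_confidence": 0.9,
}

print(json.dumps(portable_memory, indent=2))
```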
Without addressing these subtler barriers, portability rights may technically exist while users find themselves practically unable to preserve the richness of their digital assistant's understanding when they switch providers. Too much standardization too early could freeze innovation. Too little, and users may be locked in by technical incompatibility. Should policymakers treat agent memories like health records — requiring open standards to ensure continuity of care — or like consumer electronics, where competing formats are left to the market to sort out?
An alternative may be to leverage the agents' own capabilities: rather than mandating data formats, regulators could require providers to enable agent-to-agent knowledge transfer. Your old agent could create a comprehensive handoff document or engage in an extended conversation with your new agent, answering thousands of questions to convey not just facts but context, patterns, and nuances. This would let providers compete on how effectively their agents can extract and absorb information from rivals, creating market incentives for genuine portability. But this approach raises new challenges: How do we ensure such transfers don't inadvertently leak confidential employer information alongside personal preferences? Should employment contracts explicitly address whether your work-related agent memories can follow you to a new job? Who bears liability if an agent misrepresents or corrupts information during the handoff? Could a provider deliberately train their agents to be “bad teachers” to rivals?
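A toy sketch of that handoff, with invented function names and a deliberately crude confidentiality screen, might look like the following. In practice the question-answering and screening would be model calls governed by policy, not keyword matching.

```python
from typing import Optional

# Illustrative stand-ins: the receiving agent asks questions, the departing agent
# answers from its memory store, and a screen holds back employer-confidential
# material before anything crosses the boundary.
WORK_CONFIDENTIAL_MARKERS = ("client", "internal", "salary")

def old_agent_answer(question: str, memories: list[str]) -> Optional[str]:
    """Return the first stored memory that shares a word with the question, if any."""
    for memory in memories:
        if any(word in memory.lower() for word in question.lower().split()):
            return memory
    return None

def is_transferable(memory: str) -> bool:
    """Crude screen for work-confidential content; a real one would be policy-driven."""
    return not any(marker in memory.lower() for marker in WORK_CONFIDENTIAL_MARKERS)

def handoff(questions: list[str], memories: list[str]) -> list[str]:
    transferred = []
    for question in questions:
        answer = old_agent_answer(question, memories)
        if answer and is_transferable(answer):
            transferred.append(answer)
    return transferred

if __name__ == "__main__":
    memories = [
        "Prefers aisle seats on flights",
        "Works on the Acme client account (internal)",
    ]
    questions = [
        "What seating does the user prefer on flights?",
        "Which client accounts does the user handle?",
    ]
    print(handoff(questions, memories))  # only the non-confidential memory transfers
```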
These questions point to a familiar regulatory dilemma: whether to intervene early to keep markets open, or wait for problems to metastasize and then try to unwind entrenched power. With agents, the window to intervene to ensure competition and innovation is likely a short one, given how quickly these systems are said to learn and how deeply they may embed themselves in our daily routines. Hesitation risks flipping the script: instead of consumers directing their agents, agents will be directing their consumers.
Conclusion
AI agents promise to become our most trusted aides — keepers of our habits, our histories, and even our hopes. But their power derives from memory, and memory is where the risks lie. Who decides what is remembered, how long it is stored, and whether it can move freely across platforms will determine not just consumer convenience, but also the balance of power between individuals and corporations. We have seen what happens when regulators wait too long: monopolies calcify, innovation slows, and users lose control. If policymakers want AI agents to be a tool of empowerment rather than enclosure, they must act quickly to set rules for memory. Otherwise, the very technology designed to serve us may end up scripting our choices for us.