Perspective

The Privacy Challenges of Emerging Personalized AI Services

Mark MacCarthy / May 28, 2025

May 26, 2025: An introduction to Google's AI Mode experiment on the screen of a Google Pixel smartphone. Shutterstock

One of the salient features of the emerging world of advanced AI services will be an extreme emphasis on personalization. For an AI chatbot to give users answers that fully respond to their inquiries, it must have detailed user profiles specifying their interests, preferences, and needs. To perform the tasks assigned to it, an AI agent needs even more detailed profiles.

This will set off a race among providers of AI services for massive amounts of detailed user information, much of it highly sensitive: religion, political affiliation, sexual orientation and proclivities, and medical conditions, alongside more traditional marketing information such as musical tastes, brand preferences, and fashion favorites.

Several developments are worth noting in this coming age of personalized AI. One is the merging of AI and search, which will end search as a stand-alone service returning a list of links in response to user queries. Another is the astonishingly Orwellian world envisioned by the AI labs as they develop their advanced AI services; Bentham’s panopticon and Black Mirror’s worst fantasies pale before the comprehensive privacy invasions enthusiastically contemplated by Silicon Valley leaders. The third is the question of how policymakers can meet the privacy challenges of a world of personalized AI.

AI and search are merging

The nature of the search business will change substantially in this world of personalized AI services. It will evolve from a service for end users to an input into an AI service for end users. In particular, search will become a component of chatbots and AI agents, rather than the stand-alone service it is today.

This merger has already happened to some degree. OpenAI has offered a search service as part of ChatGPT since last October. Google launched AI Overviews in May of last year; the feature places a summary of search results, generated by Google’s Gemini AI model, at the top of the results page. When a user asks ChatGPT a question, the chatbot will sometimes search the internet and fold a summary of the results into its answer.

According to an OpenAI filing in the search case, by December of last year, ChatGPT search slightly exceeded AI Overviews’ 595M daily queries. A December 2024 study from Statista found that the two dominated the emerging AI search market. These developments have prompted analysts to devise new methods of calculating Google’s market share, which, traditionally and in the government’s antitrust case against Google, was estimated at around 90%. Reuters notes that a Bernstein analyst placed Google’s market share at “65% to 70% when accounting for usage of AI chatbots.” According to Wells Fargo analysts, Google’s market share “could fall to less than 50% in five years.” In 2024, Gartner predicted that traditional search volume would drop 25% by 2026 as consumers shift to AI search instead.

Google’s stock dropped 7.3% last month when an Apple executive testified in the Google search trial that searches on Safari, which uses Google as its default search engine, had dipped for the first time in twenty-two years. He attributed the decline to people using AI. In response, Google had to explain that total query volume continues to grow across Apple devices, including searches from the Google app. But it is clear that Google search is now competing against AI search.

Google took a further step to respond to chatbot competition at its I/O developer conference, where it announced that its experimental AI Mode of search would be free and available to all. This service “will provide a conversational, question-and-answer experience akin to OpenAI’s ChatGPT, rather than a traditional list of links.” Google’s search engine will feed its Gemini AI model, which will generate answers to user queries; the traditional ten blue links are eliminated in AI Mode.

Microsoft seems to be taking steps in the same direction. It recently announced the withdrawal of its Bing search API, which allowed other firms to receive search results from Bing. Instead, it now offers an AI agent service that provides summaries of internet searches rather than the raw results.

Google’s head of search says that good AI models “are now able to get around” the traditional structure of a search results page. They can “find and synthesize information from lots of sources.” As tech journalist Casey Newton writes, “the search results page is a relic” of the past.

The new personalized AI services

Chatbots thrive on a diet of personal information. A Google official recently said, “…the more that they understand your goals and who you are and what you are about, the better the help that they will be able to provide.” All the major AI companies are seeking to expand their models’ long-term memory in order to store “user profiles and preferences to provide more useful and personalized responses. For example, a chatbot may remember whether a user is a vegetarian and respond accordingly when providing restaurant recommendations or recipes.”
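To make the idea concrete, here is a minimal sketch of such a long-term memory store. Everything in it is hypothetical and invented for illustration (the UserMemory class, its fields, and the prompt format); real systems like ChatGPT’s memory or Gemini’s personalization are far more elaborate.

```python
# Minimal sketch of a chatbot long-term memory: durable facts about a user
# that are silently prepended to every prompt. All names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class UserMemory:
    """Long-term memory: remembered facts and preferences for one user."""
    facts: dict[str, str] = field(default_factory=dict)

    def remember(self, key: str, value: str) -> None:
        self.facts[key] = value

    def as_context(self) -> str:
        """Render the profile as a system-prompt preamble."""
        lines = [f"- {k}: {v}" for k, v in self.facts.items()]
        return "Known user profile:\n" + "\n".join(lines)


def build_prompt(memory: UserMemory, question: str) -> str:
    # The profile rides along with every query, which is exactly why such
    # stores accumulate sensitive personal data over time.
    return f"{memory.as_context()}\n\nUser question: {question}"


memory = UserMemory()
memory.remember("diet", "vegetarian")
memory.remember("home city", "Washington, DC")
print(build_prompt(memory, "Recommend a restaurant for tonight."))
```

The point of the sketch is structural: the more facts accumulate in the profile, the more useful, and the more privacy-invasive, every subsequent answer becomes.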

User search history can be harnessed in the service of this personalization goal. Google says that Gemini “can also use your Search history (if you've opted in) to provide more personalized and relevant responses.”

OpenAI co-founder and CEO Sam Altman captured both the promise and the privacy threat of all-knowing chatbots when he described his ideal of a “very tiny reasoning model with a trillion tokens of context that you put your whole life into.” He went on,

This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context.

This Orwellian vision of a panopticon AI device tracking every movement of a user’s life took a step closer to reality with OpenAI’s acquisition on May 21 of former Apple designer Jony Ive’s company, io. Ive, a chief architect of the iPhone, and his design team will develop a new consumer AI device for OpenAI subscribers. Existing devices like laptops and mobile phones, says Altman, are “not the sci-fi dream of what AI could do.” The new pocket-size device OpenAI is planning will be screen-free, the Wall Street Journal reports, and will be “fully aware of a user’s surroundings and life, will be unobtrusive, able to rest in one’s pocket or on one’s desk…” Another analyst suggested that “users will be able to wear the device around their necks” and that it will be equipped with microphones and cameras that can analyze the user’s surroundings. It would be able to transfer the data it collects to phones and laptops.

All AI firms are also seeking to develop and market AI assistants. Assistants go beyond chatbots by performing tasks for users, rather than just answering their questions. Users can instruct them to purchase tickets to sports games or concerts, or to book a vacation or a dinner at a restaurant. If they are properly trained and know a lot about user interests, needs, and preferences, they can carry out these tasks without further instruction.
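A rough sketch shows why such agents need detailed profiles: with enough stored preferences, every detail of a task can be filled in without a single follow-up question. The profile fields and the book_dinner function below are hypothetical, invented for illustration, and not drawn from any actual product.

```python
# Hypothetical sketch: an agent completes a dinner booking entirely from a
# stored profile, asking the user nothing. Real agents plan with an LLM and
# call external booking APIs; this only illustrates the data dependence.

profile = {
    "diet": "vegetarian",
    "budget_per_person": 40,
    "home_city": "Washington, DC",
    "free_evenings": ["Friday", "Saturday"],
    "usual_party_size": 2,
}


def book_dinner(profile: dict) -> dict:
    """Derive every booking parameter from stored personal data."""
    return {
        "city": profile["home_city"],
        "cuisine_filter": profile["diet"],        # sensitive lifestyle data
        "max_price": profile["budget_per_person"],
        "evening": profile["free_evenings"][0],   # calendar-derived data
        "party_size": profile["usual_party_size"],
    }


print(book_dinner(profile))
```

Every field in the booking request comes from personal data the agent already holds, which is the source of both its convenience and its privacy risk.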

So far, the results have been uneven, but the industry hopes that improvements in reliability and alignment will be sufficient to generate significant user adoption.

Even more than all-knowing chatbots, AI agents raise significant privacy concerns. In a comprehensive report, the privacy think tank Future of Privacy Forum writes, “AI agents may be at their most valuable when they are able to assist with tasks that involve highly sensitive data (e.g., managing a person’s email, calendar, or financial portfolio, or assisting with healthcare decision-making).” For instance, an AI agent needs complete and accurate personal information if it is to avoid “misrepresenting a user’s characteristics and preferences when it fills out a consequential form.”

Meta CEO Mark Zuckerberg has a very clear vision of the future of personalized AI agents. In a recent interview, he said, “I personally have the belief that everyone should probably have a therapist… and for people who don’t have a person who’s a therapist, I think everyone will have an AI.” He went on to describe how a personal AI assistant will help with your friends, saying “…there’s a lot of stuff about the people who I care about that I don’t remember, I could be more thoughtful… An AI that has good context about what’s going on with the people you care about, is going to be able to help you out with this.” A good assistant will be similar to a friend with whom “…you have a deep understanding of what’s going on in this person’s life and what’s going on with your friends, and what are the challenges, and what is the interplay between these different things.”

OpenAI has a similar vision of personalized AI agents. In a 2025 strategy document filed in the Google search case, the company writes, “we'll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do.”

An AI super assistant “is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of to-dos, sending emails.”

OpenAI plans to turn ChatGPT into a “super assistant that deeply understands you and serves as your interface to the internet. To fully be that interface, we need a search index and the ability to take actions on the web.”

Search is going to be a key component of the coming competition among frontier AI labs to provide AI assistants. Another OpenAI filing in the Google search case contains a diagram showing “Google search” and “ChatGPT today” occupying overlapping circles within a larger circle labeled “Super assistant.”

Anthropic has a similar plan to upgrade its AI model, Claude, to act as an agent for enterprises and professionals. Jared Kaplan, Chief Scientist at Anthropic, said in an interview in January, “Claude needs to learn enough about your particular situation and the constraints that you operate under to be useful. Things like what particular role you’re in, what styles of writing, or what needs you and your organization have. I think that we’ll see improvements there where Claude will be able to search through things like your documents, your Slack, etc., and really learn what’s useful for you.”

Achieving the level of personalization envisioned by the AI firms will require an enormous amount of personal information, raising substantial privacy issues. One Bluesky user captured the creepy feeling these business plans evoke in many, saying, “Make tech bros rewatch the specific episode of Black Mirror that they are trying to create—Clockwork Orange-style—until they understand the point of the episode.”

How policymakers can respond

US policymakers must meet the privacy challenges of these advanced AI services, including the merging of search with chatbots and AI agents. They are greatly handicapped by the absence of a baseline national privacy law, and must muddle along with some combination of state privacy laws and national regulation under the Federal Trade Commission’s unfair and deceptive acts and practices authority.

The best way forward would not be to invent a sector-specific privacy regime for AI services, although such a regime could be made to work, much as the US has placed financial, educational, and health information under dedicated industry privacy regulators. A sectoral approach might succeed if policymakers were also willing to establish a digital regulator for advanced AI chatbots and AI agents, which will be at the heart of the emerging AI services industry. But that prospect seems remote in today’s political climate, which prioritizes untrammeled innovation over protective regulation.

In the absence of a sectoral AI regulator, the best way forward would be a comprehensive privacy law that provides strong privacy protections across the board, including on the data collection and use practices of AI companies. A bipartisan privacy coalition in Congress has come close several times to passing such a law, and a new effort prompted in part by the challenges of emerging personalized AI might succeed.

But some government measures might compound the privacy challenges of personalized AI services in ways that are beyond the power of legislation to address. In the Google search antitrust case, the Department of Justice (DOJ) has proposed a data access remedy that would force Google to transfer its users’ search data to rival search companies. If not done right, these forced data transfers could make privacy matters worse.

The merger of search and AI means that search as a stand-alone service will be consigned to the dustbin of history. And this, in turn, means that the recipients of Google’s treasure trove of search data under the proposed data access remedy will not primarily be small dedicated search companies like DuckDuckGo, but frontier AI labs like OpenAI, Meta, and Anthropic, seeking to build a search component for their increasingly personalized AI services.

These companies will have every incentive to merge Google’s detailed search histories into the giant profiles they are building for their personalized AI services. As a result, the privacy protections that the DOJ requires as part of its data access remedy will have to be extraordinarily robust to overcome these rivals’ natural drive to fully exploit data put in their hands not by user consent, but by governmental fiat.

Three additions to the data access provisions in the remedies might more effectively protect privacy while still providing search rivals the data they need to develop their own search algorithms. The first is a requirement for reasonably effective de-identification before the data is transferred. The second is a condition that Google’s search rivals may receive the de-identified user data only if they agree to make no attempt to re-identify it. The third is that the technical committee set up by the DOJ to supervise implementation of the antitrust remedies, including their privacy protections, must always include a privacy expert able to ensure that the privacy obligations on Google and its search rivals are adequate and fully enforced.
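To illustrate the first of these safeguards, here is a minimal sketch of de-identifying a search record before transfer. The record fields and the keyed-hash approach are illustrative assumptions, not the DOJ’s actual specification, and because query text alone can re-identify a user, real de-identification of search logs would require considerably stronger techniques.

```python
# Illustrative sketch of de-identifying a search record before transfer:
# replace the stable user ID with a keyed hash, coarsen the timestamp, and
# drop network identifiers. The fields are hypothetical, not the DOJ's spec.

import hashlib
import hmac
import secrets

SECRET_SALT = secrets.token_bytes(32)  # held by the sender, never transferred


def pseudonymize(user_id: str) -> str:
    """Replace a stable user ID with a keyed hash the recipient cannot reverse."""
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()[:16]


def deidentify(record: dict) -> dict:
    return {
        "user": pseudonymize(record["user_id"]),  # no raw identifier leaves
        "query": record["query"],                 # still risky on its own
        "day": record["timestamp"][:10],          # coarsen time to the day
        # IP address and precise location are dropped entirely
    }


raw = {
    "user_id": "alice@example.com",
    "query": "cardiologist near me",
    "timestamp": "2025-05-21T14:03:55Z",
    "ip": "203.0.113.7",
}
print(deidentify(raw))
```

The second and third safeguards, a no-re-identification pledge and a privacy expert on the technical committee, are contractual and institutional rather than technical, which is why the de-identification step has to bear so much of the load.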

The brave new world of personalized AI services might be a privacy nightmare to many. But it is likely on the way, and policymakers must prepare to respond effectively – allowing the services to flourish if they can find users and protecting privacy along the way.

Authors

Mark MacCarthy
Mark MacCarthy is an adjunct professor at Georgetown University in the Graduate School’s Communication, Culture, & Technology Program and in the Philosophy Department. He teaches courses in technology policy, including on content moderation for social media, the ethics of speech, and ethical challenges…
