Perspective

When Governments Let Algorithms Decide Prices

Renjie Butalid / May 11, 2026

In April, Ontario Premier Doug Ford was asked a simple question: Would Ontario follow Manitoba and ban surveillance pricing on groceries? His answer was equally simple. No.

He believes, he said, in a free-market, capitalist society. Competition, not regulation, is the best driver of lower prices. If collusion is happening, he will tear retailers "to shreds." When pressed on whether that meant he was comfortable with grocery stores using surveillance pricing on basic food items, he was clear: "I did not say that." He simply would not act against it.

The problem is that Premier Ford is describing a different problem from the one Ontarians actually face, and his response reflects a pattern playing out across governments in North America. Collusion is illegal, and the Competition Bureau of Canada already has tools to address it. Surveillance pricing is something else entirely, and a competitive market offers no protection from it whatsoever.

What surveillance pricing actually is

Surveillance pricing, also called personalized or algorithmic pricing, is when a company uses your personal data to determine not what a product is worth, but what you, specifically, are willing to pay for it. Your purchase history. Your location. Your browsing behavior. The time of day you tend to shop. Whether you bought diapers last month and will almost certainly need them again.

This is not the same as a sale. A sale is public: the price on the shelf is the price for everyone who walks in. Surveillance pricing works differently. The price you see was chosen for you, and nobody in the store knows what anyone else is paying. It is also not dynamic pricing in the Uber sense, where everyone caught in the same snowstorm sees the same surge. Surveillance pricing is individualized. Two people standing in the same grocery aisle, reaching for the same box of cereal, may be offered different prices based on what the algorithm knows about each of them. The shopper deemed less price-sensitive, or less mobile, or more habitual, pays more, with no indication that this is happening and no recourse when it does.

According to the Competition Bureau of Canada, more than 60 companies in the country offered services using pricing algorithms as of last year. The practice is expanding from travel and ride-hailing, where consumers have grown accustomed to it, into grocery retail, rental housing, and financial services. These are sectors that the Competition Bureau's own public consultation identified as areas of growing concern, where the stakes are considerably higher and the data considerably more intimate.

Why the market cannot fix this

Premier Ford's instinct is that competition disciplines pricing, and in many contexts it does. That discipline rests on a foundational assumption, though: that consumers can make informed comparisons. Surveillance pricing is specifically designed to make that impossible.

The price you see is the price the algorithm decided you should see. You do not know what your neighbor was offered. You have no way to know whether you are being charged a premium or on what basis. That opacity is not incidental. It is how the system is designed to function.

When federal NDP Leader Avi Lewis tabled a motion to ban the practice, he called it a system designed to "spy on Canadians and gouge them even more." Barry Sawyer, National President of UFCW Canada, whose union represents thousands of retail workers already seeing these practices on the ground, put it plainly: these systems are "designed to maximize profit, not fairness." A market built on this kind of information asymmetry cannot be disciplined by consumer choice alone.

Algorithmic pricing systems also learn to identify and extract more from consumers who have fewer alternatives: those in lower-income brackets, in food deserts, with limited mobility, or with dependents whose needs are predictable and inelastic. The algorithm does not set out to discriminate. It simply optimizes for revenue, and the people with the least room to push back may end up paying more for it.

A recent Abacus Data poll of nearly 2,000 Canadians found that 52 percent want surveillance pricing banned outright, and another 31 percent want it more strictly regulated. Only a small fraction of Canadians support leaving it unaddressed. This is not a fringe concern. It is a majority view, grounded in something economists tend to treat as irrational, but that turns out to be foundational to how markets earn public trust: the belief that the same product should cost the same for everybody.

What Ontario's decision actually means

When Manitoba Premier Wab Kinew's government moved to introduce legislation banning retailers from using personal data to raise prices for specific consumers, the practice had not yet been widely documented locally. The immediate trigger was a US Consumer Reports investigation that found Instacart's AI-enabled pricing tools were charging different customers different prices for identical grocery items, with price variations reaching as much as 23 percent. Instacart's own internal communications described one tactic as "smart rounding." The province moved to act before the harm arrived rather than after.

That is what good governance looks like on emerging technology risks: establishing the norm before it is needed, not scrambling to reverse infrastructure that has already been built, data that has already been collected, and habits that have already been normalized. The longer a practice operates without a legal boundary, the harder it becomes to draw that boundary.

Ontario is Canada's largest consumer market and home to some of the country's most sophisticated retail and financial services operations. A decision not to regulate here does not simply leave Ontarians without protection. It signals to the industry that the largest provincial market will not intervene in how algorithms are deployed against the people who shop in it. That is a consequential signal, whether or not it was intended as one.

Ontario NDP Leader Marit Stiles tabled a non-binding motion urging the Ford government to ban the practice. It was voted down by 58 Conservative MPPs. Premier Ford did not show up to vote at all. Interim Ontario Liberal leader John Fraser has since tabled a private member's bill proposing the same. The legislature's opposition, across party lines, is pointing in one direction. The government is pointing in another.

The absence of regulation is itself a decision

Since 2018, the Montreal AI Ethics Institute has tracked the gap between what AI systems are designed to do and what we collectively decide they should be allowed to do. Through The AI Ethics Brief and our annual State of AI Ethics Report, published since 2020, we have documented how that gap widens quietly, in places where the public is not yet looking. Surveillance pricing sits in a specific and particularly dangerous part of it: profitable, technically legal, not yet widely understood, and directly contrary to the basic intuition that markets should be fair.

The question Premier Ford was really being asked was not about capitalism. It was about whether Ontarians deserve to know when an algorithm is using their personal data to charge them more than their neighbor for the same product, and whether they have any right to push back when it does. "Let the market decide" is not an answer to that question. It is a decision to let the algorithm decide instead, and to call that freedom.

Manitoba has introduced legislation to draw a line. Ontario's opposition parties are pushing for the same. On April 15, federal MPs voted down a motion to ban the practice nationally, with both Liberals and Conservatives opposing it. Ontario's Premier made his position clear the following day, and then did not show up when his own legislature voted on it. That is worth understanding for what it is: not a defense of markets, but an abdication of responsibility to ensure markets work for the people who depend on them.

Authors

Renjie Butalid
Renjie Butalid is Co-Founder of the Montreal AI Ethics Institute (MAIEI) and VP Business Development at Metrika, an enterprise risk management platform serving global financial institutions.
