Why Technology Won’t Save Us Unless We Change Our Behavior
Frenk van Harreveld / Jul 14, 2025
Digital Nomads: Digital-Based Connection by Yutong Liu & Digit / Better Images of AI / CC BY 4.0
We can design greener tech, smarter AI, and healthier systems—but unless people use them, trust them, and stick with them, they won’t matter.
Climate change, overstretched healthcare systems, and the rise of artificial intelligence are among the greatest challenges of our time. We often turn to technology for solutions: cleaner energy, more efficient healthcare, safer algorithms. But innovation is only half the story. The other half is us—and our behavior.
Even the most promising technology fails if people don’t use it, understand it, or trust it. Green products have to be bought and used. Preventive health tools only work if lifestyles change. AI systems can boost efficiency, but only if users engage critically and responsibly. More often than not, the bottleneck is not in what we can build, but in what people actually do.
Take Google Glass. The smart glasses were marketed as a breakthrough in wearable tech, but they failed with the public—not because the technology didn’t work, but because it felt intrusive, elitist, and socially awkward. A less spectacular but telling example is the slow rise of meat alternatives. These products are technological feats—sustainable, innovative, and increasingly available. And yet many people still default to eating meat, often because of habit, social norms, or price.
Why good intentions often fail
Behavioral science helps explain why. At the dinner table, we say we want to be healthier and more sustainable. But in the supermarket, we often reach for what’s easy, tasty, and cheap. That’s rarely the choice that’s also healthy or green. Our decisions are strongly shaped by immediate rewards, while long-term goals—like protecting the climate or avoiding chronic illness—often fade when it’s time to act.
In my research, I’ve found that long-term intentions are particularly fragile when we’re in a “hot” state—hungry, tired, or emotionally aroused. People who are tired or sexually stimulated are far more likely to make impulsive decisions and abandon longer-term values. It’s not hypocrisy. It’s how the human brain works: the here and now dominates, while distant goals are forgotten.
Designing for sustainable decisions
This phenomenon—what psychologists call the “hot-cold empathy gap”—makes it hard to act on our best intentions unless the context actively supports us. The solution isn’t simply to tell people to try harder. It’s to create environments and systems that help them succeed.
That psychological distance between present choices and future consequences can also be a lever for change. Behavior shifts when the benefits of good choices feel closer—and the costs of bad ones more immediate. In a study on plastic use, we found that prompting momentary guilt helped people make more sustainable choices. Not to shame them, but to bring the issue emotionally into the present.
Design also matters. In another study, we found that people were more likely to buy green products if they looked recognizably sustainable. Why? Because those choices signaled something about who they are. “Green to be seen” isn’t a punchline—it’s a strategy. Make the right behavior visible, and it becomes socially valuable.
This is part of a broader class of influences known as social norms or normative influence. When sustainable, ethical, or healthy behavior becomes the standard in one’s social group, it is far more likely to spread. People aren’t just rational agents—they’re social animals. And social proof matters.
Behavioral lessons from the pandemic and EVs
Health shows a similar pattern. Smoking has become socially unacceptable. Fitness and clean eating, meanwhile, are now status symbols. What used to be about long-term benefit has been reframed to deliver short-term social rewards—admiration, self-image, belonging. That’s when behavior really shifts.
Artificial intelligence may seem like a different story, but the underlying behavioral dynamics are familiar. People embrace AI for writing, health tracking, shopping, or social media because it’s fast and frictionless. The rewards are immediate. But the risks—privacy loss, algorithmic bias, dependence—feel abstract and far away. That imbalance makes critical, ethical use harder. And AI doesn’t just reflect our behavior; it actively shapes it, through nudges, personalization, and addictive loops.
The gap between technology and behavior became especially clear during the COVID-19 pandemic. The limited success of vaccination campaigns, mask-wearing, and social distancing was less a matter of available technology and more a matter of behavioral adoption. And those behaviors were influenced by trust, emotion, and identity as much as by facts. In some places, appeals to communal responsibility and social solidarity worked. In others, where messaging became moralized or politically loaded, resistance grew.
We’ve seen similar dynamics in efforts to promote energy conservation and the adoption of electric vehicles. In Norway, for example, generous subsidies and dedicated road lanes helped boost early EV adoption. But social status and visibility played a large role too. When people saw Teslas in their neighbors' driveways, the vehicles became aspirational—not just ecological. Meanwhile, where EVs remain rare or mocked as elitist, uptake lags behind.
Public policy can play a transformative role if it leverages behavioral insights. Rather than relying solely on bans or subsidies, well-designed policies can nudge citizens by altering default options, reframing messages, or reducing friction. A striking example comes from organ donation: countries with opt-out systems tend to have far higher donation rates than those requiring explicit opt-in—because inertia is a powerful force.
Bridging short-term rewards and long-term goals
What feels good now might be harmful later. That’s why behavior should be a design principle, not an afterthought. We need systems where short-term ease doesn’t come at the expense of long-term responsibility.
And we need to stop relying on fear. Public messaging on sustainability, health, and AI often leans on alarm: melting ice caps, cancer warnings, data breaches. But fear alone rarely changes behavior. People shut down, feel helpless, or look away. Positive motivation is more effective: make the good choice easy, visible, and desirable.
In a recent international study on sustainable smartphones, we found that joy, excitement, and hope drove purchase intention far more than specs or brand loyalty. People chose the sustainable option not just because it was “right,” but because it felt good.
Towards a human-centered transition
Ultimately, behavioral change depends not just on individual willpower, but on how our environments are designed. That includes physical design, but also social structures, digital architectures, and policy frameworks. If we want to build a better future, we have to build for behavior.
The good news is that behavior is malleable. Unlike immutable technological laws, human behavior can shift—sometimes quickly—when the context is right. The challenge is not in changing minds, but in shaping environments that make better choices feel natural, desirable, and rewarding.
Behavior change doesn’t require a revolution. But it does require an invitation. As the anarchist thinker Emma Goldman is loosely paraphrased as saying: “If I can’t dance, I don’t want to be part of your revolution.”
If we want people to move toward a healthier, greener, more responsible society, we have to make that movement feel like something worth joining. Not through fear or guilt alone—but by making it visible, rewarding, and human.