How Philanthropy Can Use the “AI Moment” to Build More Just Futures Today
Charley Johnson, Michelle Shevin / Mar 26, 2025
Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Colossal Harvest / CC-BY 4.0
Since the release of OpenAI's ChatGPT in late 2022, tech-curious philanthropists have asked two questions: “What should we do about AI?” And: “How can we build proactive capacity when we have to play defense to mitigate the real harms produced by the private sector?” In 2025, these fundamentally reactive questions have reached a fever pitch, and the harms produced by the private sector are now being matched by the United States government. Government data is being fed into “AI” systems while government agencies that administer key public services are being gutted.
As a funder and a practitioner both working directly on how algorithmic technologies impact social systems, we believe that we need to ask better questions, ones laden with fewer presumptions about what is and will be. “What should we do about AI” situates technology as the starting point. If we’re going to change the path that we’re on, we can’t start with the technology and then imagine what the world might look like with that technology in it. We have to instead envision the world that we want and then map backward to determine whether/how to include AI at all.
Lessons from our work – in collaboration and in community with others – suggest an alternative way forward that expands the aperture of what’s possible for philanthropy. We know that many philanthropic funders tend toward humility amidst complexity, and that they want to jettison straightforward technical solutions to linearly framed problems in favor of a sociotechnical approach. They recognize that coalitional power is necessary for transforming power within systems and building new ones. In this two-part series, we invite these funders to think differently about the “AI moment” and toward the possibility of self-determined futures for communities that are too often acted upon by technology innovation. This first piece argues that we’ve misidentified the most critical infrastructure for AI: it’s not technical infrastructure like compute power, it’s relational and translational infrastructure. In part two, “How Philanthropy Can Use the ‘AI Moment’ to Build More Just Futures Today,” we argue that changing systems requires transforming power dynamics and mapping backward from the future.
Part One: The Infrastructure We’ll Need for the Possibility of Democratic AI Futures
Build Relational Infrastructure
When we think of “AI infrastructure,” we immediately imagine GPUs and data. But when we think about the infrastructure required for systems change, we must focus instead on relationships. In “The Relational Work of Systems Change,” Katherine Milligan, Juanita Zerda, and John Kania call relationships “the essence and fabric of collective impact,” adding that they can “form new avenues for innovation to address the social problem at hand.” This might sound obvious. But in philanthropy, building relational infrastructure is rarely resourced. This is problematic because changing any system — as advocates of collective impact will tell you — starts by getting the system in the room. As the systems theorist Brenda Zimmerman once said, “The most important unit of analysis in a system is not the part (e.g. individual, organization, or institution), it’s the relationship between the parts.” If we can change the relationship between the parts of a system, we can change the system itself.
Changing the relationships between the parts of a system requires altering how the parts orient to one another, how they see and understand one another, and how they connect and interact. Data & Society’s Public Technology Leadership Collaborative (PTLC) is a peer-learning community of academics, practitioners, and government leaders who cultivate trust-based relationships to ensure that data and technology serve the public interest. This is a strategic attempt to change the relational dynamics in a sociotechnical system. For example, government leaders are rarely invited to spaces where they don’t have to represent their institution. In turn, they typically speak from talking points, share less, and attempt to mitigate all possible risks. But the PTLC encourages stakeholders to take off their role-identity hats and show up as people first — as peer-learners and co-problem solvers. And it dedicates significant time to participating stakeholders — not just government leaders but academics and practitioners too — getting to know one another as people and sharing one another’s context, so that participants understand the constraints each faces, the incentives that guide their actions, and the way they show up in the broader system. These practices and norms are a small step toward making previously hidden dynamics within the system visible.
Similarly, Stop LAPD Spying works with community members to help them understand the negative impacts of surveillance technology. They’ve found it’s not enough to bring people into a shared space of wanting to stop a particular technological intervention. Instead, they work to interrogate the relationships between local public education institutions, government agencies, and the financial interests of major businesses — as they call it, the “algorithmic ecology” of data-driven policing. In a workshop they’ve been refining for six years, the organizers bring stakeholders into a process of mapping the relationships between actors in the problem space, considering their own positionality within the problem system, and elucidating the systems dynamics that lead to the continued over-surveillance of certain people in certain places. This opens up new avenues for not only resistance and refusal of harmful technologies, but also reimagining the relationships that build sustainable power for ethical decision-making and durable community norms.
Philanthropy can play a key role in helping ensure there is physical, digital, and social infrastructure for people to be in relationship with each other as they build their awareness of systems. When we relentlessly prioritize “efficiency” defined as time- and cost-savings, or when we assume that technologies that disintermediate relationships can, by default, positively transform systems, we are resourcing transactional infrastructures rather than relational ones. Cultivating spaces and durable opportunities for people to convene, converse, learn, dance, move, be moved, and discover shared stakes in problems and possibilities meaningfully builds agency, democratic capacity, and participatory power.
Build Translational Infrastructure
Once relational infrastructure is in place, the question becomes: how do you translate across difference – different expertise, perspectives, experiences, epistemologies? The typical answer to this question is some version of “capacity building,” wherein an expert of some kind conducts a training or two with interested participants. But this is woefully insufficient because, as we’ve learned through the PTLC, translation across difference is an ongoing conversation among peers. Truly understanding the perspective of another — how their expertise, skills, and epistemology converge to shape how they see a sociotechnical challenge — takes time and iteration. It is a looping process and ongoing practice, not a linear one that results in a credential.
This isn’t about establishing what is “true” to develop a shared objective reality, and then acting accordingly. In contrast to a linear, positivist approach to communication, systems work assumes many things may be true at once for different people within different parts of the system. This often looks like letting go of being correct and leaning into finding shared stakes and priorities, toward collaborating on a shared vision of the future. In this way, we can “get on the same page” using slightly different maps, while using each other’s perspectives to discover additional entry points for action and intervention within the system.
It’s not just that stakeholders from different vantage points might see a given challenge differently — it’s that, seen through a sociotechnical lens, as communications scholar Mike Ananny explains, we’re not “seeing” the problems, we’re making them. Whether we individualize a problem or emphasize its structural elements; whether we consider it a technological problem or a social one; what frames we use and whose experience is legitimized — all of these things contribute to the “making” of a given problem. The work of building translational infrastructure is in seeing how different stakeholders make a given problem, and then offering them a container to work across those differences to remake the problem space together. For example, when the Twin Cities Innovation Alliance interrogated a proposal in Minnesota to build a predictive model designed to provide “early warning” on undesired outcomes for “at-risk” communities, they found a well-intentioned effort to better understand who might be impacted by well-worn patterns of policing and marginalization, with the idea that “diversion” supports could be offered to those community members. However, similar applications had shown over and over again that using data to label someone as “at-risk” was more likely to function as a self-fulfilling designation — an outcome the system planned for — than as something the system could be made to avoid. Is the problem that the state didn’t know who was “at risk” for criminalization, or that the system interactions and dynamics that created the underlying data that presumably trained the predictive system – over-policing, economic precarity, etc. – made certain people “at risk”? These two different framings would lead to very different interventions.
The good news for the Twin Cities Innovation Alliance was that public sector leaders proposing the development of a predictive model were genuinely interested in the problem space. It took months of dedicated translation work, but, guided by the shared vision of community members who had spent time together to deeply understand the problem space (including engaging in fun activities like “algorithmic improv”), local leaders moved from an attitude of “the community doesn’t understand the technology being proposed” to “this expertise is critical and additive, and we’re going to need ongoing infrastructure for the work of transforming systems together.” This significant shift was a result of community broadening the aperture to both resist and refuse the illegitimate and pseudoscientific assumptions underlying the proposed tech “solution,” while also reasserting a civil rights framework that redirected attention to the roots of deeper problems that could not be addressed so linearly by technology, and while bringing multiple perspectives from people experiencing the problem space into the conversation about how to address and reimagine it.
In part two, we’ll lean into how funders can build this capacity for reimagining problems and their interventions.
Part Two: How Philanthropy Can Use the “AI Moment” to Build More Just Futures Today
This is the second in a two-part series inviting philanthropic funders to think differently about the “AI moment,” toward the possibility of self-determined futures for communities that are too often acted upon by technology innovation.
So how can funders transform power dynamics in their system, and build new ones?
Transform power dynamics
While the future is unwritten, many of its facets are “path dependent” on prior arrangements and ongoing power dynamics. We’re creating our future each day through our decisions, actions, choices, and the relationships we build. These actions and relations groove a path, which subtly instantiates power relations. Who has agency and decision-making authority, and who doesn’t? Who benefits from this version of reality? Who has something to lose if we change society’s default settings and chart a radically different path?
Transforming power dynamics starts with funders themselves. Every funder-grantee relationship is rife with power imbalances. If these aren’t named, grantees will engage performatively, knowing that any misstep might have real financial implications. By acknowledging the imbalance and working to build trust-based relationships, grantees and funders can create a container that tolerates disagreement and conflict and privileges context sharing, curiosity, and mutual understanding. Refusing to name the power imbalance doesn’t remove it; it just allows power to operate invisibly: Grantees don’t raise real problems. Complexities are smoothed over. Funders hesitate to push or pry. They abdicate their power because they’ve long been accused of abusing it and overstepping.
Transforming power dynamics starts with funders, but it doesn’t end there. As Milligan, Zerda, and Kania write, “Collective impact efforts must be intentional about not replicating the power imbalances of the systems they work in.” The tendency of technologies to disproportionately harm those at the margins of society conflicts with the propensity to design for scale and bridge ‘access divides.’ Sense-making efforts should therefore center the perspectives of people who are disproportionately harmed and disempowered, both in understanding the challenge and in designing the intervention. Many have cataloged approaches for doing this work thoughtfully, and where it might go awry. But one obvious approach is to reflect in the design of any workshop or convening the power dynamics that you want to see. For example, when the PTLC worked with a government agency to imagine what meaningful participation of affected groups looks like in the AI development life cycle, it first worked with community groups to set the agenda for that workshop and understand how they would define success.
Similarly, when the Twin Cities Innovation Alliance heard of the philanthropically funded plan to use public administrative data to predict which kids might eventually come into contact with law enforcement in Minnesota, they pushed back with their own data – information from hundreds of community listening sessions where parents and students had often shared their experiences of being profiled, over-surveilled, and criminalized in educational environments. Bringing these and other concerns to local officials through a policy brief that treated the “school-to-prison” pipeline as a systems problem experienced disparately by certain community members not only ended plans to develop a predictive model to flag “at-risk” kids, but led to the creation of a “Transforming Systems Together” board to ensure those most impacted by public decisions would have a built-in voice moving forward. This was a significant breakthrough resulting in durable infrastructure for community power. But even in the aftermath of this success, philanthropic funders have shown little interest in funding the expansion and replication of tools for resistance and reimagining, even as they continue to fund ‘positive use cases for AI.’ As Aasim Shabazz from the Twin Cities Innovation Alliance puts it, “We need to reignite philanthropy’s imagination on what is possible – too often, funders are following a script that is unintentionally creating harm.”
Map backwards from the future
As Albert Einstein famously put it, “Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.” Philanthropy’s power lies not in assembling the knowledge that allows us to react to dominant tech imaginaries, but in building infrastructure for broad participation in a project of moving collectively toward and from alternative future visions of human thriving.
This starts by imagining radical futures and mapping backward. Philanthropy has long understood the complex nature of a sociotechnical problem (e.g. those that don’t abide by cause-and-effect), but it has struggled to identify strategies that transform the broken system such that it doesn’t just return us to a more efficient version of the status quo. We believe it’s in imagining radical future systems that we push the boundaries on what’s possible today, and tomorrow. And it’s in mapping backward to the current conception of the problem that we break cause-and-effect, linear thinking, and identify sociotechnical interventions that don’t simply address the current harm but move us toward better futures.
Luckily, philanthropy need look no further than its own grantees to see that radically different possible futures are all around us, being envisioned, experienced, and lived by diverse communities around the world. This summer in the Twin Cities, organizers from communities across the United States and abroad came together for the fifth annual “Data for Public Good Conference,” where they shared stories about engaging with decision-makers across sectors on data-driven technologies as (in the words of Dr. Chelsea Barabas) “terrains for social struggle,” while learning about Afrofuturism and co-creating a shared vision of joy and abundance that was lived together in person in the short term. It’s in the “View from Somewhere” book deal signed recently by Dr. Timnit Gebru—herself in good community with a cadre of powerful “tech Cassandras” who are uniquely positioned to help us better grok the default futures we inherit and to help us imagine past them. It’s in the professional governance ecosystems being reimagined and reinforced by groups like Digital Public, who are training “duty-bearing” professionals to participate in defining the future for their discipline. It’s in the young people at Encode Justice chapters across the country laying out an affirmative vision for technology policy.
As William Gibson is credited with saying, “the future is already here, it’s just unevenly distributed.” In fact, lots of things are unevenly distributed: resources, capacity, relational infrastructure, and power among them. This is partially why the “AI moment” in philanthropy and society more broadly strikes us as so remarkable: how ironic that a term being discussed at every conference, by every field and industry, and in every conversation about the future – the ‘thing’ that is supposedly going to ‘change everything’ and ‘solve complex problems’ – is also marketing mumbo jumbo for a cadre of pattern-matching and fill-in-the-blank machines that trap possibilities for the future in a story about the past.
But this path isn’t inevitable. By asking better questions, imagining alternative futures and mapping backward, transforming power dynamics, translating across difference, and building relational infrastructure, we can meet this moment while building more just, abundant, and joyful futures.
Charley Johnson is the director of the Public Technology Leadership Collaborative at Data & Society and the creator of Untangled, a newsletter and podcast about our sociotechnical world and how to change it. Michelle Shevin is the former senior program manager of the Ford Foundation's Public Interest Technology Catalyst Fund.
Editor's note: Tech Policy Press received a grant from the Ford Foundation in 2024.