The 1970 Law That Solves AI’s Legitimacy Crisis
James Andrews / Jan 6, 2026
James Andrews is a business professor at Ohlone College.

The United States Capitol around 1970 (Wikimedia Commons, CC BY 3.0)
My profession, accounting, is built on the precise use of language. It runs on a five-century philosophy of order and verification, enforced by federal agencies, state boards, private standard setters, and a licensure system that carries the threat of civil and criminal liability. Institutions such as the FASB, the IASB, and the SEC exist to define language and meaning. Yet even inside that fortress of linguistic discipline, the word “operating” means one thing on the Income Statement and something different on the Statement of Cash Flows.
Artificial General Intelligence (AGI) is the hypothetical ability of a computer program to perform any intellectual task a human can. Tools built on that premise are already being deployed, and they treat a term like “operating” as a statistical symbol. But even Claude Shannon, whose mathematical theories contributed to machine learning, would contend that while language can be represented as signals to be measured and encoded, such an approach has inherent limits. A model can find and measure patterns within its context window, but it can’t provide the grounded meaning that institutions such as medicine, finance, or law require.
Indeed, AI systems built on Shannon’s mathematics have hit a structural limit. They can optimize signal patterns, but they cannot supply the semantic and epistemic structures that institutions depend on. Meaning must be defined, maintained, and governed by accountable human authorities.
Often, their deployment isn’t intentional. Research from the Federal Reserve shows that most workplace AI enters through informal employee use rather than formal adoption, yet the impact is institutional. No governing standards to define meaning, no audit trails, no accountable authority, no one legally responsible for outcomes. Just statistical pattern matching across incompatible contexts, deployed at scale in decisions that shape people's health, welfare, and freedom. When these systems fail, the response is always the same: more data, bigger context windows, more compute.
We’ve seen this movie before
In the late 1960s, the credit reporting industry was a nationwide predictive system that blended qualitative and quantitative data to score a person’s creditworthiness. It touched nearly every American’s economic life, yet operated with opaque methods, inconsistent terminology, limited disclosure, and no meaningful accountability. Private bureaus were informing consequential decisions about employment, housing, and lending while lacking even the most basic safeguards.
By 1970, the public had had enough. The Fair Credit Reporting Act (FCRA) passed the House of Representatives 302–0 and was signed into law by President Nixon. It required no advanced compute, no machine learning, and nothing resembling modern infrastructure. FCRA effectively built what today we would call an epistemic layer.
First, it established definitions for many terms, such as “consumer report” and “file.” Before FCRA, credit bureaus, courts, and lawmakers used different words for the same concepts. Shared language requires cooperation and commitment to common definitions. FCRA forced both.
Second, it imposed accuracy standards on consumer reporting agencies. They had to standardize their procedures, apply them consistently, and maintain controls to ensure their reports reflected those procedures rather than ad-hoc judgment.
Third, it created an audit trail and correction process. Consumers gained the right to access their files, challenge errors, and require investigations, and agencies had to trace disputed items back to their sources and correct mistakes within defined timelines.
FCRA governed the meaning, rules, and responsibilities around a predictive system long before modern AI existed and proved that such governance is both feasible and essential.
FCRA did not work by mandating transparency into proprietary scoring models; the government never asked how those algorithms operated or required disclosure of internal formulas. Instead, it forced accountability.
For example, in a 2017 case, a plaintiff won a $2.5 million judgment against a loan servicer without proving malicious intent, only that the firm was repeatedly notified of erroneous reporting and failed to correct it as FCRA required. Under this model, enforcement relies on affected individuals policing system output: ignoring consumer complaints and failing to meet statutory correction duties carry substantial legal liability. Applied to AI, the same structure would make firms responsible for system outputs, requiring them to trace the sources of user-reported errors and correct them rather than hide behind model complexity or disclaimers.
With FCRA, Congress regulated responsibility, not calculations. The Act empowered adversely affected individuals to challenge outputs. That shift forced credit agencies to govern their inputs, definitions, and procedures if they wanted to remain compliant. Congress understood that both meaning and accountability come from people, not patterns. When credit reports are wrong, consumers know. When AI outputs affect lives, affected parties can identify errors even if the system remains opaque.
FCRA’s power lies in how it reshaped risk-mitigation strategy. Credit reporting agencies learned not to report outputs they could not explain or correct. That same epistemic framework is portable across domains.
But what does governance actually look like in practice?
Institutional policy prescription: Build the architecture first
W. Edwards Deming, a Shannon contemporary in post-war systems thinking, used statistics to model organizational systems while proving that math alone could never govern them. Quality required human cooperation around processes that were explicitly defined, continuously disciplined, and relentlessly measured.
AI systems violate this framework at every point. They operate without defined inputs, controlled processes, or accountable authority. When outputs fail, the fix is always the same: more examples, more compute, and more hope. Deming would have recognized this immediately: management malpractice.
Institutions can only realize AI’s potential by using it to augment human judgment, not replace it. And that requires building governance architecture first. The principles aren’t new; they’re foundational to computer system design and have been central to management science for over a century.
- Define Institutional DNA. Explicitly define key terms, data elements, and business rules, with named executives responsible for approving them. Systems must be deterministic, not probabilistic.
- Measure Semantic and Epistemic Coherence. Track whether language is used consistently across the organization (semantic coherence) and whether AI outputs align with approved rules and authorities (epistemic coherence). You cannot govern what you cannot measure.
- Establish Auditability. Every decision must be reconstructable: inputs, sources, and rules applied. If you cannot trace how a conclusion was reached, you do not have governance. (A minimal sketch of such a record follows this list.)
- Use AI to Improve Governance, Not Replace It. User interactions reveal edge cases, policy conflicts, and gaps in institutional knowledge. Treat this as a feedback loop for strengthening business rules and refining the semantic layer.
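To make auditability concrete, here is a minimal sketch in Python of what such a decision record could look like. The names (GlossaryTerm, BusinessRule, DecisionRecord) and the sample data are illustrative assumptions, not a prescription for any particular system; the point is only that each output carries its inputs, sources, rule versions, and named approvers, so it can be reconstructed and corrected on demand.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: names and fields are assumptions, not a real standard.

@dataclass(frozen=True)
class GlossaryTerm:
    """A key term with one approved definition and a named, accountable owner."""
    term: str
    definition: str
    approved_by: str
    version: str

@dataclass(frozen=True)
class BusinessRule:
    """A deterministic rule with a named approver: same inputs, same outcome."""
    rule_id: str
    description: str
    approved_by: str
    version: str

@dataclass
class DecisionRecord:
    """An auditable decision: inputs, sources, terms, and the exact rules applied."""
    decision_id: str
    inputs: dict
    sources: list[str]              # where each input came from
    terms_used: list[GlossaryTerm]  # approved definitions the decision relies on
    rules_applied: list[BusinessRule]
    outcome: str
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def reconstruct(self) -> str:
        """Trace how the conclusion was reached, for audits and disputed outputs."""
        rules = "; ".join(
            f"{r.rule_id} v{r.version} (approved by {r.approved_by})"
            for r in self.rules_applied
        )
        return (
            f"Decision {self.decision_id}: outcome={self.outcome!r}, "
            f"inputs={self.inputs}, sources={self.sources}, rules=[{rules}]"
        )

# Hypothetical usage: every consequential output gets a record like this.
operating = GlossaryTerm("operating", "Income from core business activities", "Controller", "2.0")
rule = BusinessRule("REV-12", "Classify income as operating per the approved glossary", "CFO", "3.1")
record = DecisionRecord(
    decision_id="2026-000417",
    inputs={"operating_income": 1_250_000},
    sources=["general_ledger_export_2026-01-31.csv"],
    terms_used=[operating],
    rules_applied=[rule],
    outcome="classified_as_operating",
)
print(record.reconstruct())
```

Nothing in the sketch is probabilistic, and nothing explains a model’s internals. It simply assigns responsibility for the output and preserves the trail needed to challenge it, which is the FCRA move.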
The choice is simple: build the architecture that preserves institutional meaning, or allow systems that erode it. There is no middle ground.
Leaders must learn from the past
The stakes are not abstract. FCRA emerged from public anger at companies that treated people as data points to be sold. When credit errors cost people jobs, homes, and marriages, the industry treated those harms as regrettable but unavoidable. Congress responded unanimously. AI now operates across every consequential domain, from health and education to finance and national security. Its errors will reach further and cut deeper. At the same time, it has genuine potential to expand human capability if it is governed properly.
History settled the argument about governance and accountability. We have a blueprint. Building an epistemic layer didn’t weaken the credit reporting industry; it made it scalable. The same firms that warned that FCRA would destroy them became powerful, publicly traded multinational companies. Credit became more accessible, safer, and more standardized precisely because FCRA forced agencies to earn public legitimacy.
AI is now at the same crossroads, and the stakes are even higher. It has already embedded itself into organizational decision-making, whether leaders prepared for it or not. With proper governance, it can strengthen institutions. Without it, institutional meaning will erode along with public trust. The choice is straightforward: build the architecture now, on institutional terms, or wait for public backlash to sweep away the good with the bad.