A foresight framework for human–AI coexistence in the agentic era. Seven principles, four operating layers, twelve measurable indicators — offered as a public good.
Across twenty-five years of enterprise technology work, including more than a hundred generative AI engagements in banking, insurance, healthcare, the public sector, and industry, one pattern repeats: the technology works, but the surrounding social architecture cannot absorb it. The Trust Compact is the framework that gives that architecture shape.
The Compact is not a regulation, a product, or a certification. It is a framework for agreement — general enough to be adopted across sectors and jurisdictions, specific enough to be measured.
"Every action taken by an AI system must be verifiable independently of the vendor that produced it."
"Every decision must carry a reason that the person affected by it can read and comprehend."
"Where reversal is possible, an AI action must be reversible; where it is not, it must require human authorisation."
"The authority granted to an AI system must be proportionate to the consequences of its actions."
"No single vendor, model, or value system shall unilaterally define the criteria by which trust is assessed."
"Trust must be preserved across model upgrades, vendor changes, and generational transitions."
"Behind every AI action stands a human signatory who bears responsibility for it. 'The AI did it' is not a defence."
Most existing AI governance frameworks address only the middle two layers, the regulatory and the organisational. The Compact's claim is that the trust gap persists in the layers they ignore: the societal and the individual.
- Societal: civil society, citizens, media. Concern: democratic legitimacy of AI deployment.
- Regulatory: regulators, auditors. Concern: demonstrability of legal and procedural compliance.
- Organisational: enterprise leadership, CIO, DPO, AI lead. Concern: operational accountability.
- Individual: the end-user and any person materially affected. Concern: comprehension and recourse.
Trust is not a single quantity. The Compact specifies twelve heterogeneous indicators — continuous, ordinal, and binary — to resist optimisation at the expense of substance.
| # | Layer | Metric | Unit |
|---|---|---|---|
| 1 | Individual | Trust Calibration Score | 0–100 |
| 2 | Individual | User Override Rate | % |
| 3 | Individual | Explainability Comprehension Rate | % |
| 4 | Organisational | AI Action Audit Coverage | % |
| 5 | Organisational | Human-in-the-Loop Ratio | % |
| 6 | Organisational | Incident Response Time | minutes |
| 7 | Regulatory | Compliance Pack Coverage | % |
| 8 | Regulatory | Audit Trail Completeness | % |
| 9 | Regulatory | Cross-vendor Portability | 0 / 1 |
| 10 | Societal | Public Disclosure Index | 0–10 |
| 11 | Societal | Multi-stakeholder Review | 0 / 1 |
| 12 | Societal | Demographic Impact Audit | 0 / 1 |
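Because the twelve indicators deliberately mix continuous, ordinal, and binary types, collapsing them into a single composite number would invite exactly the optimisation-for-show the Compact resists. One way to honour that design is to type each indicator, validate it against its unit, and report the vector whole. A sketch under that assumption; field and function names are illustrative, not normative.

```python
from dataclasses import dataclass

# Value ranges implied by the metrics table, keyed by unit label.
BOUNDS = {"%": (0, 100), "0-100": (0, 100), "0/1": (0, 1), "0-10": (0, 10)}


@dataclass
class Indicator:
    layer: str   # Individual | Organisational | Regulatory | Societal
    name: str
    value: float
    unit: str    # "%", "0-100", "minutes", "0/1", "0-10"

    def __post_init__(self):
        if self.unit in BOUNDS:
            lo, hi = BOUNDS[self.unit]
            if not lo <= self.value <= hi:
                raise ValueError(f"{self.name}: {self.value} outside {self.unit}")
        if self.unit == "0/1" and self.value not in (0, 1):
            raise ValueError(f"{self.name}: binary indicator must be 0 or 1")


def report(indicators: list[Indicator]) -> dict[str, list[tuple[str, float, str]]]:
    """Group indicators by layer. Deliberately provides no aggregate
    score: the twelve values are reported as a vector, per layer."""
    out: dict[str, list[tuple[str, float, str]]] = {}
    for ind in indicators:
        out.setdefault(ind.layer, []).append((ind.name, ind.value, ind.unit))
    return out
```

The absence of a `total_score()` function is the design choice: a dashboard built on this shape must show all twelve dials, not one averaged needle.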
Strategic foresight maps three plausible ten-year landscapes for human–AI trust. The Compact is built to operate across the Mediated and Sovereign trajectories, and to give civil society leverage against drift toward the Extractive one.
Extractive: global platform vendors own the trust infrastructure. Trust becomes a subscription service. Digital sovereignty erodes. Innovation outside the platforms becomes uncertifiable.
Mediated: multi-stakeholder trust compacts emerge as public goods. Standards bodies, regulators, industry coalitions, and civil society jointly author the criteria. Open standards interoperate.
Sovereign: jurisdictions and communities define their own trust standards. Cross-border AI requires translation between regimes. Pluralism is preserved at the cost of some friction.
The Compact's reference implementation, IC-GATE, was piloted on more than twelve thousand AI-generated outputs across two enterprise use cases: an AI-powered financial report analysis agent and a healthcare triage assistant.
Beneath the IC-GATE pilot sits a longitudinal portfolio: more than one hundred enterprise generative AI engagements between 2018 and 2026, spanning banking, insurance, healthcare, public sector, retail, and industrial manufacturing in Türkiye, the EU, the UK, and the GCC. From this portfolio, the Compact codifies nine recurring failure patterns: the Audit Mirage, the Hallucination Ownership Vacuum, the Vendor-Tethered Evaluator, the Explanation-for-Insiders, the Override Without Memory, the Compliance Pack Cliff, the Demographic Blind Spot, the Civil Society Absence, and the Upgrade Discontinuity. Each principle of the Compact addresses one or more of these patterns; each metric is calibrated to detect them.
The Compact's most significant ten-year impact is the institutionalisation of trust in AI as a measurable, shared, multi-stakeholder property — analogous to financial audit standards in the twentieth century, environmental impact assessment, and information security certification.
EU AI Act-regulated enterprises establish reference implementations. Compact-aligned metrics begin to appear in annual disclosures.
Healthcare, Finance, and Public Sector Trust Compacts emerge — preserving the seven-principle backbone with sector-specific refinements.
ISO, IEEE, and national standards bodies formally recognise compacts. Enterprise procurement begins to require adherence — the role ISO 27001 plays today in information security.
Mutual recognition agreements among jurisdictional compacts emerge. The concept of an "AI passport" enters policy discussion.
The Trust Compact is published as an open framework. Any party — enterprise, regulator, civil society organisation, academic institution, individual researcher — is welcome to adopt, adapt, criticise, or extend it. Engagement is invited.
Licence: Creative Commons Attribution 4.0 International (CC BY 4.0)
Version: 1.0 · May 2026 · A multi-stakeholder advisory board will govern Version 2.0 onward