Inno-Craft Foresight · Version 1.0 · May 2026

The Trust Compact

A foresight framework for human–AI coexistence in the agentic era. Seven principles, four operating layers, twelve measurable indicators — offered as a public good.

By Engin Sayan, Founder & CEO of Inno-Craft LLC · 25+ years in enterprise AI · former IBM & Big-Four AI leadership · 100+ generative AI engagements

The Trust Gap

Eighty-five per cent of enterprise AI projects fail — not from lack of capability, but from lack of trust.

Across twenty-five years and more than a hundred generative AI engagements in banking, insurance, healthcare, the public sector, and industry, one pattern repeats: the technology works, but the surrounding social architecture cannot absorb it. The Trust Compact gives that architecture shape.

~30%
of LLM outputs contain unverifiable factual claims
63%
of CIOs cite AI governance as the principal barrier to adoption
€15M
maximum fine under the EU AI Act (or 3% of worldwide annual turnover, whichever is higher) for non-compliance with transparency and audit obligations
The Framework

Seven principles. Four layers. Twelve metrics.

The Compact is not a regulation, a product, or a certification. It is a framework for agreement — general enough to be adopted across sectors and jurisdictions, specific enough to be measured.

The Seven Principles

PRINCIPLE 01

Verifiability

"Every action taken by an AI system must be verifiable independently of the vendor that produced it."

PRINCIPLE 02

Explainability

"Every decision must carry a reason that the person affected by it can read and comprehend."

PRINCIPLE 03

Reversibility

"Where an AI action can be reversed, it must be reversible; where it cannot, it must require human authorisation."

PRINCIPLE 04

Proportionality

"The authority granted to an AI system must be proportionate to the consequences of its actions."

PRINCIPLE 05

Pluralism

"No single vendor, model, or value system shall unilaterally define the criteria by which trust is assessed."

PRINCIPLE 06

Continuity

"Trust must be preserved across model upgrades, vendor changes, and generational transitions."

PRINCIPLE 07

Accountability

"Behind every AI action stands a human signatory who bears responsibility for it. 'The AI did it' is not a defence."

The Four Layers

Most existing AI governance frameworks address only the middle two layers. The Compact's claim is that the trust gap persists in the layers they ignore.

LAYER 04

Societal — Public Legitimacy

Civil society, citizens, media. Concern: democratic legitimacy of AI deployment.

LAYER 03

Regulatory — Compliance & Audit

Regulators, auditors. Concern: demonstrability of legal and procedural compliance.

LAYER 02

Organisational — Governance & Roles

Enterprise leadership, CIO, DPO, AI lead. Concern: operational accountability.

LAYER 01

Individual — Trust Calibration

End-user and any person materially affected. Concern: comprehension and recourse.

The Twelve Metrics

Trust is not a single quantity. The Compact specifies twelve heterogeneous indicators — continuous, ordinal, and binary — to resist optimisation at the expense of substance.

 #   Layer           Metric                              Unit
 1   Individual      Trust Calibration Score             0–100
 2   Individual      User Override Rate                  %
 3   Individual      Explainability Comprehension Rate   %
 4   Organisational  AI Action Audit Coverage            %
 5   Organisational  Human-in-the-Loop Ratio             %
 6   Organisational  Incident Response Time              minutes
 7   Regulatory      Compliance Pack Coverage            %
 8   Regulatory      Audit Trail Completeness            %
 9   Regulatory      Cross-vendor Portability            0 / 1
10   Societal        Public Disclosure Index             0–10
11   Societal        Multi-stakeholder Review            0 / 1
12   Societal        Demographic Impact Audit            0 / 1
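
To make the heterogeneity of the scorecard concrete, here is a minimal data-model sketch, not a normative schema: the Metric class, the thresholds, and the example values below are illustrative. The design point it demonstrates is the Compact's resistance to single-score optimisation: compliance is evaluated per metric and never averaged, so a strong result on one indicator cannot buy back a failure on another.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Metric:
    layer: str        # Individual, Organisational, Regulatory, Societal
    name: str
    kind: str         # "continuous", "ordinal", or "binary"
    value: float      # binary metrics use 0.0 / 1.0
    threshold: float  # acceptance boundary (illustrative values below)
    higher_is_better: bool = True  # Incident Response Time flips this

    def passes(self) -> bool:
        if self.higher_is_better:
            return self.value >= self.threshold
        return self.value <= self.threshold


def compact_compliant(scorecard: list[Metric]) -> bool:
    """Every metric must clear its own threshold. Deliberately no weighted
    average: a perfect audit trail cannot buy back a failed Demographic
    Impact Audit."""
    return all(m.passes() for m in scorecard)


# Thresholds and values here are illustrative, not pilot data.
scorecard = [
    Metric("Individual", "Trust Calibration Score", "continuous", 87, 70),
    Metric("Organisational", "Incident Response Time", "continuous",
           45, 60, higher_is_better=False),  # minutes
    Metric("Regulatory", "Cross-vendor Portability", "binary", 1, 1),
    Metric("Societal", "Demographic Impact Audit", "binary", 0, 1),
]
print(compact_compliant(scorecard))  # False: one failed metric blocks compliance
```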
Three Futures

The Compact is designed for the futures we want — not the future we drift into.

Strategic foresight maps three plausible ten-year landscapes for human–AI trust. The Compact is built to operate across the Mediated and Sovereign trajectories, and to give civil society leverage against drift toward the Extractive one.

A · Extractive AI

Global platform vendors own the trust infrastructure. Trust becomes a subscription service. Digital sovereignty erodes. Innovation outside the platforms becomes uncertifiable.

B · Mediated AI

Multi-stakeholder trust compacts emerge as public goods. Standards bodies, regulators, industry coalitions, and civil society jointly author the criteria. Open standards interoperate.

C · Sovereign AI

Jurisdictions and communities define their own trust standards. Cross-border AI requires translation between regimes. Pluralism is preserved at the cost of some friction.

Evidence

This is not theory. The framework rests on a reference implementation that has been measured.

The Compact's reference implementation, IC-GATE, was piloted on twelve thousand AI-generated outputs across two enterprise use cases: an AI-powered financial report analysis agent and a healthcare triage assistant.

68%
reduction in hallucinations reaching end-users
87/100
average Trust Score across all evaluated outputs
100%
policy rule compliance — zero non-compliant outputs
<120ms
processing overhead per output — negligible latency impact
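
IC-GATE's actual architecture is documented in the pitch deck referenced below; purely to fix ideas, here is a hedged sketch of how an inline output gate of this shape could be structured. Every function name, stub, and threshold is hypothetical, not IC-GATE's API: real claim verification and policy checking would call out to dedicated services.

```python
from dataclasses import dataclass, field


@dataclass
class GateResult:
    released: bool
    trust_score: int                       # 0-100, per Metric 1 above
    reasons: list[str] = field(default_factory=list)


def verify_claims(text: str) -> float:
    """Hypothetical verifier: returns the fraction of factual claims in the
    text with supporting evidence. A real implementation would call a
    retrieval or fact-checking service here."""
    return 0.9  # stub


def violates_policy(text: str) -> list[str]:
    """Hypothetical rule engine: returns the policy rules the text breaks."""
    return []  # stub


def gate_output(text: str, release_threshold: int = 70) -> GateResult:
    """Score one output and decide whether it may reach an end-user.

    Policy violations are a hard stop; otherwise the output is released
    only if its trust score clears the threshold. The whole check must
    stay cheap enough to sit inline (<~120 ms per output in the pilot).
    """
    violations = violates_policy(text)
    if violations:
        return GateResult(released=False, trust_score=0, reasons=violations)
    score = round(100 * verify_claims(text))
    if score < release_threshold:
        return GateResult(released=False, trust_score=score,
                          reasons=["trust score below release threshold"])
    return GateResult(released=True, trust_score=score)


result = gate_output("Q3 revenue rose 4% year on year.")
print(result.released, result.trust_score)  # True 90
```

The hard stop on policy violations mirrors the 100% rule-compliance result above: policy is a gate, not a weighted input to the score.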

Beneath the IC-GATE pilot sits a longitudinal portfolio: more than one hundred enterprise generative AI engagements between 2018 and 2026, across banking, insurance, healthcare, public sector, retail, and industrial manufacturing — in Türkiye, the EU, the UK, and the GCC. From this portfolio, the Compact codifies nine recurring failure patterns (the Audit Mirage; the Hallucination Ownership Vacuum; the Vendor-Tethered Evaluator; the Explanation-for-Insiders; the Override Without Memory; the Compliance Pack Cliff; the Demographic Blind Spot; the Civil Society Absence; the Upgrade Discontinuity). Each principle of the Compact addresses one or more of these patterns; each metric is calibrated to detect them.

Reference implementation — Detailed architecture, pilot results, and the commercial deployment model for IC-GATE are documented in the IC-GATE pitch deck (PDF).
Ten-Year Horizon

From early adopters to civic infrastructure.

The Compact's most significant ten-year impact is the institutionalisation of trust in AI as a measurable, shared, multi-stakeholder property — analogous to financial audit standards in the twentieth century, environmental impact assessment, and information security certification.

2026–28

Early implementation

EU AI Act-regulated enterprises establish reference implementations. Compact-aligned metrics begin to appear in annual disclosures.

2028–31

Sectoral compacts

Healthcare, Finance, and Public Sector Trust Compacts emerge — preserving the seven-principle backbone with sector-specific refinements.

2031–35

Standards convergence

ISO, IEEE, and national standards bodies formally recognise compacts. Enterprise procurement begins to require adherence — the role ISO 27001 plays today in information security.

2035+

Cross-border infrastructure

Mutual recognition agreements among jurisdictional compacts emerge. The concept of an "AI passport" enters policy discussion.

An Open Framework

Trust in AI is not a product. It is a public good.

The Trust Compact is published as an open framework. Any party — enterprise, regulator, civil society organisation, academic institution, individual researcher — is welcome to adopt, adapt, criticise, or extend it. Engagement is invited.

Licence: Creative Commons Attribution 4.0 International (CC BY 4.0)
Version: 1.0 · May 2026 · A multi-stakeholder advisory board will govern the framework from Version 2.0 onward