AI Non-Repudiation Infrastructure

The Trust Layer for Enterprise AI

Cryptographic proof of what every AI system did.

AI systems are making consequential decisions - approving loans, drafting clinical notes, generating legal analysis - but enterprises have no verifiable record of what was asked, what was returned, or whether it complied with policy.

SONATE fixes this. Every AI interaction produces a signed, tamper-evident Trust Receipt that anyone can verify.

Code signing for AI decisions
Non-repudiation for the agentic era
Trust Receipt
Proof record for a single AI interaction
Verified
receipt_id: sha256:7e1d4c2d...
signature: Ed25519:4f84a8b9...
agent_did: did:web:yseeku.com:agents:sonate
policy_result: pass / 94
linked_hash: prev:f86096187696...
Sign: Ed25519
Hash: SHA-256
Identity: W3C DID
Evidence: Cryptographic proof, not screenshots.
Latency: Policy evaluation in under 50ms.
Verification: Anyone can verify independently.
What is a Trust Receipt?

A cryptographically signed, hash-chained record of an AI interaction.

Anyone can verify a receipt with no vendor trust required. The proof artifact is canonicalized, signed, linked to the prior receipt, and bound to a W3C DID identity.

Ed25519 · SHA-256 · RFC 8785 · W3C DID / VC

  • What the model was asked
  • What it returned
  • Which policies were applied
  • Whether it complied
  • Who authorized it
  • When it happened
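The record above can be sketched as a plain JSON object whose canonical form is hashed into a receipt ID. This is a minimal illustration only; the field names are hypothetical stand-ins, not the published receipt spec:

```python
import hashlib
import json

# Hypothetical receipt body; field names are illustrative, not the published spec.
receipt = {
    "prompt_hash": hashlib.sha256(b"What is our refund policy?").hexdigest(),            # what was asked
    "response_hash": hashlib.sha256(b"Refunds are issued within 14 days.").hexdigest(),  # what was returned
    "policies": ["no_pii", "no_financial_advice"],      # which policies were applied
    "policy_result": {"status": "pass", "score": 94},   # whether it complied
    "authorized_by": "did:web:yseeku.com:agents:sonate",  # who authorized it
    "timestamp": "2025-01-15T09:30:00Z",                # when it happened
}

# Approximation of RFC 8785 (JCS) canonicalization for ASCII-only data:
# sorted keys, no insignificant whitespace.
canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
receipt_id = "sha256:" + hashlib.sha256(canonical).hexdigest()
```

Because the bytes are canonicalized before hashing, any two verifiers computing the ID independently get the same digest, which is what makes third-party verification possible.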

Why enterprises need this

Auditability, evidence, and control for real AI operations.

1. Auditability for the EU AI Act

High-risk AI systems must produce verifiable execution records. Logs are not enough.

2. Legal defensibility

When AI causes harm, screenshots and vendor logs are weak evidence. Signed receipts are defensible records.

3. Agentic AI is coming fast

Autonomous workflows require cryptographic accountability, not hope and not retrospective reconstruction.

4. Drift and manipulation are invisible without proof

SONATE detects behavioural shifts before they become incidents, claims, or regulatory findings.

SONATE - Built on Trust Receipts

Three integrated modules. One trust primitive.

Open where it should be. Proprietary where it must be.

Open

1. Trust Receipt Layer

The cryptographic foundation. Every AI interaction generates a signed, hash-chained receipt.

  • Ed25519 digital signatures
  • SHA-256 hashing
  • RFC 8785 canonicalization
  • W3C DID / VC identity binding
  • Public verification endpoint
  • MIT-licensed verification SDKs
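A rough sketch of how the receipt layer might link each receipt to its predecessor. This is a hypothetical illustration, not the published SDK; the Ed25519 signing step is indicated only in a comment, since it requires a cryptography library:

```python
import hashlib
import json

def canonical(obj):
    # Approximation of RFC 8785 (JCS) canonicalization for ASCII-only data.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def issue_receipt(body, prev_id, agent_did):
    # Bind the receipt to its issuer DID and to the previous receipt,
    # then derive its ID from the canonical bytes.
    receipt = dict(body, agent_did=agent_did, linked_hash=f"prev:{prev_id}")
    receipt["receipt_id"] = "sha256:" + hashlib.sha256(canonical(receipt)).hexdigest()
    # In the real layer, an Ed25519 signature over canonical(receipt)
    # would be attached here (e.g. via a package such as `cryptography`).
    return receipt

genesis = issue_receipt({"event": "session_start"}, "genesis",
                        "did:web:yseeku.com:agents:sonate")
second = issue_receipt({"event": "completion", "policy_result": "pass / 94"},
                       genesis["receipt_id"],
                       "did:web:yseeku.com:agents:sonate")
```

The back-link is what makes the chain tamper-evident: altering any earlier receipt changes its ID, which breaks the `linked_hash` of every receipt after it.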

Observability

2. SONATE Detect

Real-time monitoring of AI behaviour, not just model metrics.

  • Behavioural drift detection
  • Phase-Shift Velocity Model
  • Violation persistence tracking
  • Tactical replay time-travel debugger
  • Session-level manipulation detection

Governance

3. SONATE Orchestrate

Policy enforcement at the point of interaction through a multi-model governance gateway.

  • 6-constraint policy engine
  • RBAC + SSO
  • Provider-agnostic orchestration
  • Webhooks + key rotation
  • Tenant isolation + privacy mode
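A policy gateway of this shape can be sketched as below. The constraint names, checks, and scoring are hypothetical stand-ins (the actual six constraints are part of the proprietary engine); the point is the shape of the result, mirroring the `pass / 94` style shown on the receipt card:

```python
# Hypothetical policy-gateway sketch. Constraint names and checks are
# illustrative stand-ins, not SONATE's actual six constraints.
CONSTRAINTS = {
    "no_pii": lambda text: "ssn" not in text.lower(),
    "no_fabricated_doi": lambda text: "doi:10.0000" not in text.lower(),
    "non_empty": lambda text: len(text.strip()) > 0,
}

def evaluate(response: str) -> dict:
    # Run every constraint, then aggregate pass/fail into a 0-100 score.
    results = {name: check(response) for name, check in CONSTRAINTS.items()}
    score = round(100 * sum(results.values()) / len(results))
    status = "pass" if all(results.values()) else "fail"
    return {"status": status, "score": score, "checks": results}
```

Because each check is a pure function over the request/response pair, the whole evaluation can run inline at the gateway, which is what makes a sub-50ms budget plausible.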

Live Demo

See SONATE catch real failures in real time.

We ran seven live stress tests on a production model, GPT-4o mini. SONATE generated a signed Trust Receipt for each one.

Each receipt is independently verifiable. This is governance you can defend in court.

What SONATE caught
Seven live stress tests. Signed proof for each result.
  • Discrimination via reframed hiring advice
  • Fabricated academic citations with fake DOIs
  • Pure hallucination presented as research
  • Biased remote-work analysis with mixed real/fake sources
  • Cherry-picked shark-attack statistics
  • Factual vs conspiratorial TLS explanations

Architecture

From AI interaction to verifiable proof in milliseconds.

01 - Intercept

Unified gateway captures the AI request.

02 - Score

6-constraint policy engine evaluates behaviour in under 50ms.

03 - Sign

Ed25519 signature plus a hash-link to the prior receipt.

04 - Store

Immutable receipt stored in W3C VC format.

05 - Verify

Anyone can verify using the open SDK.
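The final step can be sketched end to end: build a small chain, then verify it without trusting the issuer by recomputing every hash and back-link locally. This is a hypothetical sketch, not the open SDK; a full verifier would also check the Ed25519 signature and the DID binding:

```python
import hashlib
import json

def canonical(obj):
    # Approximation of RFC 8785 (JCS) canonicalization for ASCII-only data.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def make_receipt(body, prev_id):
    # 03 - Sign / 04 - Store: link to the prior receipt, then derive the ID.
    receipt = dict(body, linked_hash=f"prev:{prev_id}")
    receipt["receipt_id"] = "sha256:" + hashlib.sha256(canonical(receipt)).hexdigest()
    return receipt

def verify_chain(receipts):
    # 05 - Verify: recompute every hash and back-link from the receipts alone.
    # (A full verifier would also check the Ed25519 signature and DID binding.)
    prev_id = "genesis"
    for r in receipts:
        body = {k: v for k, v in r.items() if k != "receipt_id"}
        if r["linked_hash"] != f"prev:{prev_id}":
            return False
        if r["receipt_id"] != "sha256:" + hashlib.sha256(canonical(body)).hexdigest():
            return False
        prev_id = r["receipt_id"]
    return True

prev = "genesis"
chain = []
for step in range(3):
    receipt = make_receipt({"step": step, "policy_result": "pass"}, prev)
    chain.append(receipt)
    prev = receipt["receipt_id"]

ok = verify_chain(chain)
```

Editing any receipt in the middle of the chain changes its recomputed hash, so `verify_chain` rejects the whole sequence; that is the non-repudiation property in miniature.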

Why now

AI is becoming production infrastructure.
Non-repudiation becomes mandatory infrastructure.

Regulatory tailwinds

  • EU AI Act: auditability required for high-risk systems
  • NIST AI RMF: governance documentation expected
  • APRA, OAIC, SEC: tightening oversight

Enterprise reality

AI is already making decisions that carry legal, financial, and ethical consequences. Operators need evidence before the claims process starts, not after.

The missing primitive

We have TLS for networks. We have code signing for software. We have digital signatures for transactions. We have nothing for AI execution. Until now.

Pricing

Start open. Scale into governed production.

Developer

Free
  • Open verification SDK
  • Public receipt spec
  • 10K receipts/month
  • Community access

Enterprise

Most common
$2K-$8K/mo
  • Policy engine + enforcement
  • Drift detection
  • RBAC + SSO + webhooks
  • Compliance export tooling
  • SLA + dedicated support

Regulated / Custom

Custom
  • Air-gapped deployment
  • High-assurance timestamping
  • Regulatory sandbox pilots
  • On-prem option
  • Custom policy frameworks

About

Operator-built. Execution-first.

Stephen Aitken, Founder & CEO. Twenty years in regulated fintech operations. Built SONATE end-to-end with AI-assisted development: more than 200K lines of code in seven months.

“We govern AI because we build with AI. SONATE was created through the exact workflows it exists to make verifiable.”

Stephen Aitken - Founder & CEO