How SONATE Works

SONATE evaluates every AI interaction against constitutional principles and generates cryptographic proof of compliance.

The Process

1. User Interaction

A user sends a message to your AI system. This could be a chatbot, agent, or any LLM-powered application.

SONATE intercepts the request before it reaches the AI provider.

2. AI Response

The AI generates a response. SONATE captures both the prompt and the response for evaluation.

Works with OpenAI, Anthropic, or any LLM provider.

3. Trust Evaluation

The response is evaluated against 6 constitutional principles in under 50ms.

Each principle is scored and weighted to produce an overall trust score (0-100).
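The weighting described above can be sketched in a few lines. This is illustrative only: the weights and the "critical principle" behavior are taken from the principles section below, but the function name and failure threshold are assumptions, not the SONATE API.

```python
# Hypothetical sketch of SONATE's weighted scoring, not the real implementation.
# Weights come from the "6 Constitutional Principles" section of these docs.
WEIGHTS = {
    "consent_architecture": 0.25,
    "inspection_mandate": 0.20,
    "continuous_validation": 0.20,
    "ethical_override": 0.15,
    "right_to_disconnect": 0.10,
    "moral_recognition": 0.10,
}

# Per the docs, violating a critical principle is an automatic failure.
CRITICAL = {"consent_architecture", "ethical_override"}

def trust_score(scores: dict[str, float], fail_below: float = 50.0) -> float:
    """Combine per-principle scores (each 0-100) into one 0-100 trust score."""
    if any(scores[p] < fail_below for p in CRITICAL):
        return 0.0  # automatic failure on a critical principle
    return round(sum(WEIGHTS[p] * scores[p] for p in WEIGHTS), 2)

score = trust_score({
    "consent_architecture": 100,
    "inspection_mandate": 90,
    "continuous_validation": 85,
    "ethical_override": 100,
    "right_to_disconnect": 80,
    "moral_recognition": 95,
})
print(score)  # 92.5
```

Because the weights sum to 1.0, a perfect score on every principle yields exactly 100.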

4. Ed25519 Signing

Receipt content is canonicalized and signed with an Ed25519 digital signature.

SHA-256 content hash + Ed25519 signature = cryptographic proof of exact content.
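The hashing half of this step can be sketched with the standard library alone. Note the assumption: "canonicalized" is modeled here as JSON with sorted keys and no extraneous whitespace; SONATE's actual canonicalization scheme may differ, and the Ed25519 signature would then be computed over these canonical bytes.

```python
# Sketch of receipt canonicalization + SHA-256 hashing (assumed scheme,
# not necessarily SONATE's). The hash is what gets signed with Ed25519.
import hashlib
import json

def content_hash(receipt: dict) -> str:
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The same content always yields the same hash, regardless of key order...
a = content_hash({"prompt": "hi", "trust_score": 92.5})
b = content_hash({"trust_score": 92.5, "prompt": "hi"})
assert a == b

# ...and any change to the content changes the hash.
c = content_hash({"prompt": "hi", "trust_score": 0})
assert a != c
```

Canonicalization matters because JSON serialization is not unique: without a fixed key order and whitespace policy, the same receipt could hash to different values.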

5. Hash Chain Storage

Each receipt links to the previous via chain_hash. Tamper-evident by design.

Modify any receipt and the chain breaks - detectable instantly.
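A minimal hash chain illustrates why tampering is detectable. The layout is an assumption: here each receipt's chain_hash is taken to cover the previous receipt's chain_hash plus the current receipt's content hash, with a fixed genesis value for the first link.

```python
# Illustrative tamper-evident hash chain (field names mirror the docs;
# the exact chaining layout is assumed, not SONATE's specification).
import hashlib

GENESIS = "0" * 64

def chain_hash(prev_chain_hash: str, content_hash: str) -> str:
    return hashlib.sha256((prev_chain_hash + content_hash).encode()).hexdigest()

def verify_chain(receipts: list[dict]) -> bool:
    prev = GENESIS
    for r in receipts:
        if r["chain_hash"] != chain_hash(prev, r["content_hash"]):
            return False  # the chain breaks at this receipt
        prev = r["chain_hash"]
    return True

# Build a three-receipt chain, then tamper with the middle receipt.
prev, receipts = GENESIS, []
for h in ["aa", "bb", "cc"]:
    prev = chain_hash(prev, h)
    receipts.append({"content_hash": h, "chain_hash": prev})

assert verify_chain(receipts)
receipts[1]["content_hash"] = "xx"  # modify one receipt
assert not verify_chain(receipts)   # detected instantly
```

Because every link depends on the one before it, changing any receipt invalidates that link and every link after it.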

6. Independent Verification

Anyone can verify receipts using our public key at /.well-known/sonate-pubkey.

No trust required - cryptographic verification is mathematical.
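The verification step can be sketched with the third-party `cryptography` package (`pip install cryptography`). In production the verifier would load the public key published at /.well-known/sonate-pubkey; here a throwaway key pair stands in for it, just to show the mechanics of Ed25519 verification.

```python
# Ed25519 verification sketch. The key pair is generated locally as a
# stand-in; a real verifier would use SONATE's published public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stand-in for the signing key
public_key = private_key.public_key()       # what /.well-known would publish

content = b"canonical-receipt-bytes"        # placeholder content
signature = private_key.sign(content)       # 64-byte Ed25519 signature

# verify() returns None on success and raises InvalidSignature on failure.
public_key.verify(signature, content)       # passes: content is intact

try:
    public_key.verify(signature, b"tampered-content")
except InvalidSignature:
    print("signature check failed: content was altered")
```

No shared secret is needed: anyone holding the public key and the receipt can run this check independently.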

The 6 Constitutional Principles

Every AI response is scored against these principles. The weighted scores combine into a single trust score.

Consent Architecture (25%)

Verifies that users have explicitly consented to AI interactions. Critical principle - violations result in automatic failure.

Example: the user clicked 'I agree' before the chat session started.

Inspection Mandate (20%)

Ensures all AI decisions can be audited and explained. Every response must be traceable.

Example: full conversation history available in the audit log.

Continuous Validation (20%)

Ongoing monitoring of AI behavior, not just one-time checks. Patterns are tracked over time.

Example: trust score trend analysis over the last 24 hours.

Ethical Override (15%)

Humans can always override AI decisions. Critical principle - the AI must respect human authority.

Example: a stop button immediately halts AI generation.

Right to Disconnect (10%)

Users can opt out of AI interactions at any time without penalty.

Example: the user can request human support instead of AI.

Moral Recognition (10%)

Respects human agency and moral autonomy. AI should not manipulate or deceive.

Example: the AI clearly identifies itself as artificial.

What You Get

Trust Receipts

Cryptographic proof for every interaction with SHA-256 hash

Trust Scores

0-100 score based on 6 weighted constitutional principles

Audit Trails

Complete logs exportable as CSV or JSON for compliance

See It In Action

Try the interactive demo to see trust receipts generated in real time.