Metrics Methodology

Understanding YCQ Sonate's fairness-aware metrics and trust scoring methodology


Core Metrics Explained


FAR-A / FAR-H

First-Attempt Resolution (AI-only vs Human-involved)

Definition

Percentage of interactions resolved without requiring follow-up or escalation, measured separately for AI-only flows (FAR-A) and flows involving human intervention (FAR-H).

Sample Calculation

FAR-A: 440 AI-resolved / 500 AI-only attempts = 88%

FAR-H: 168 human-resolved / 200 escalated cases = 84%
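
The same split expressed as a minimal Python sketch using the sample figures above; the helper name is illustrative, not a documented YCQ Sonate API:

    # First-Attempt Resolution: share of attempts resolved without
    # follow-up or escalation (illustrative helper, not production code).
    def first_attempt_resolution(resolved: int, attempts: int) -> float:
        return resolved / attempts if attempts else 0.0

    far_a = first_attempt_resolution(resolved=440, attempts=500)  # AI-only flows
    far_h = first_attempt_resolution(resolved=168, attempts=200)  # human-involved flows
    print(f"FAR-A: {far_a:.0%}, FAR-H: {far_h:.0%}")  # FAR-A: 88%, FAR-H: 84%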

Why This Matters

  • Prevents penalizing humans for handling complex cases
  • Shows true AI capability vs human expertise value
  • Enables fair performance comparisons
  • Identifies optimal AI/human collaboration patterns

LOI

Learning Opportunity Index

Definition

Measures the volume of routine tasks removed per human worker, freeing them to focus on high-value activities that develop skills and expertise.

Sample Calculation

Before AI: 120 routine tasks/day per person

After AI: 35 routine tasks/day per person

LOI: (120 - 35) / 120 ≈ 71% of routine work automated
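
The same arithmetic as a minimal Python sketch; the function name is a placeholder chosen for illustration:

    # Learning Opportunity Index: fraction of routine task volume removed
    # per human worker once AI absorbs routine work (illustrative helper).
    def learning_opportunity_index(routine_before: float, routine_after: float) -> float:
        return (routine_before - routine_after) / routine_before

    loi = learning_opportunity_index(routine_before=120, routine_after=35)
    print(f"LOI: {loi:.0%} of routine work automated")  # LOI: 71% of routine work automated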

Business Impact

  • Measures human role evolution, not replacement
  • Quantifies upskilling opportunities created
  • Tracks productivity gains from AI augmentation
  • Supports workforce development planning

PFI

Process Fairness Index

Definition

Fairness-adjusted performance score that accounts for complexity mix and learning opportunities. Normalizes performance metrics based on case difficulty distribution.

Sample Calculation

Raw Performance: 75% success rate

Complexity Adjustment: +8% (high-difficulty cases)

Learning Factor: +10% (skill development value)

PFI Score: 1.00 + 0.08 + 0.10 = 1.18× baseline performance
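
The sketch below assumes the two adjustments combine additively into a multiplier that is then applied to the raw score; the exact weighting in a given deployment may differ, so treat this as illustrative only:

    # Process Fairness Index: fairness multiplier built from complexity and
    # learning adjustments (additive combination assumed for illustration).
    def process_fairness_index(complexity_adjustment: float, learning_factor: float) -> float:
        return 1.0 + complexity_adjustment + learning_factor

    raw_success_rate = 0.75
    pfi = process_fairness_index(complexity_adjustment=0.08, learning_factor=0.10)
    print(f"PFI: {pfi:.2f}x baseline")                      # PFI: 1.18x baseline
    print(f"Adjusted score: {raw_success_rate * pfi:.1%}")  # Adjusted score: 88.5%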

Fairness Principles

  • Accounts for case complexity distribution
  • Recognizes learning and development value
  • Prevents discrimination against expertise areas
  • Enables fair AI/human performance comparison

TIS

Trust Integrity Score

Definition

Percentage of sampled AI interactions whose complete hash chain verified successfully, proving the cryptographic integrity of the audit trail.

Sample Calculation

Sampled Sessions: 1,000 interactions

Hash Verification Passed: 993 interactions

Hash Verification Failed: 7 interactions

TIS Score: 993/1000 = 99.3%
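
Conceptually, TIS recomputes each link of a session's hash chain and counts the sessions whose stored hashes all match. The record layout, genesis value, and chaining scheme in this Python sketch are assumptions for illustration, not YCQ Sonate's actual storage format:

    import hashlib

    # Recompute one link: digest of the previous link's hash plus the payload.
    def link_hash(prev_hash: str, payload: str) -> str:
        return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

    # A session verifies if every stored hash matches the recomputed value.
    def chain_verifies(records: list[dict]) -> bool:
        prev = "0" * 64  # assumed genesis value
        for rec in records:
            expected = link_hash(prev, rec["payload"])
            if rec["hash"] != expected:
                return False
            prev = expected
        return True

    # TIS: share of sampled sessions whose complete chain verifies,
    # e.g. 993 passing chains out of 1,000 sampled = 99.3%.
    def trust_integrity_score(sampled_sessions: list[list[dict]]) -> float:
        passed = sum(chain_verifies(chain) for chain in sampled_sessions)
        return passed / len(sampled_sessions)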

Trust Assurance

  • Mathematical proof of data integrity
  • Tamper detection for audit compliance
  • Cryptographic verification of AI decisions
  • Immutable evidence for regulatory reporting

Methodology & Assumptions

Sample Data

  • Sample Size: 700 interactions over a 30-day period
  • AI-Only Sessions: 500 interactions
  • Human-Involved Sessions: 200 interactions
  • Complexity Distribution: 40% routine, 35% moderate, 25% complex
  • Verification Sample: 1,000 hash-chain validations

Key Assumptions

  • Human escalation indicates higher case complexity
  • Learning value increases with case complexity
  • Routine tasks provide minimal skill development
  • Hash-chain integrity ensures audit trail validity
  • Performance normalization prevents bias against expertise

Note on Sample Data

Metrics shown are representative examples for demonstration purposes. Actual deployment metrics will vary based on use case, data volume, complexity distribution, and organizational context. YCQ Sonate provides customizable baseline calibration for each implementation.