Proof of Quality

Open infrastructure delivering verifiable quality signals across existing AI workflows, without replacing tools or processes.

Book a Call

Human-Powered AI Verification

Don’t trust, verify. PoQ adds a verifiable quality signal where subjective judgment matters.

Why PoQ

AI systems fail not because models are weak, but because subjective quality is poorly enforced. Without a quality signal, you get drift, errors, and confident answers you cannot rely on.

The onchain world offers two rules that help in this environment: don't trust, verify, and make sure incentives push people toward honest behaviour. PoQ brings those ideas into AI with staking, local consensus, onchain attestations and built-in data sovereignty.

PoQ produces a trusted signal. Your system decides what to do with it.

Core Concepts

Oracles

Oracles connect PoQ to the external systems that validators and contributors use, handling interaction with tools, agents, and workflows.

Validation

Independent judgments are collected and weighted by stake to reach local consensus.

Attestation

A verifiable onchain record that downstream systems can audit and trust. Immutable provenance for training and RL loops.
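As an illustration of what such a record might contain, an attestation can commit to the verified artifact by hash, so downstream systems can check provenance without holding the data itself. The field names below are hypothetical, not the actual PoQ schema:

```python
# Illustrative attestation record (hypothetical fields, not the PoQ schema).
# The artifact hash commits to the verified content; the data itself never
# leaves the originator's storage.
import hashlib
import time

def make_attestation(task_id, artifact_bytes, verdict, validators):
    return {
        "task_id": task_id,
        # sha256 digest lets anyone later verify the artifact is unchanged
        "artifact_hash": hashlib.sha256(artifact_bytes).hexdigest(),
        "verdict": verdict,            # e.g. "accepted" / "rejected"
        "validators": validators,      # committee that reached consensus
        "timestamp": int(time.time()),
    }

att = make_attestation("task-1", b"labeled dataset v1", "accepted", ["v1", "v2"])
```

A consumer re-hashes the artifact it holds and compares against `artifact_hash` to confirm it is auditing the same data the committee judged.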

Protocol Roles

Originator creates tasks. Contributor does the work. Validator reviews submissions. Adapters connect PoQ to external tools.

Data Sovereignty

Your data stays in your bucket. PoQ verifies quality without requiring you to hand over control.

Local Consensus

Validator committees reach agreement on quality. Validators aligned with consensus earn rewards; those who diverge are slashed.
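A minimal sketch of stake-weighted consensus with slashing, assuming a simple majority-of-stake acceptance threshold (the threshold, names, and return shape are illustrative, not the actual PoQ rules):

```python
# Hypothetical sketch of stake-weighted local consensus with slashing.
# Not the actual PoQ protocol logic; names and threshold are illustrative.

def local_consensus(votes, threshold=0.5):
    """votes: list of (validator_id, stake, accepted: bool).

    Returns (accepted, aligned, slashed): the stake-weighted verdict,
    the validators who agreed with it, and those who diverged.
    """
    total_stake = sum(stake for _, stake, _ in votes)
    yes_stake = sum(stake for _, stake, ok in votes if ok)
    accepted = yes_stake / total_stake > threshold
    aligned = [v for v, _, ok in votes if ok == accepted]
    slashed = [v for v, _, ok in votes if ok != accepted]
    return accepted, aligned, slashed

votes = [("v1", 100, True), ("v2", 60, True), ("v3", 40, False)]
accepted, aligned, slashed = local_consensus(votes)
# 160 of 200 stake voted yes, so the work is accepted; v3 diverged
```

Weighting by stake rather than headcount is what makes dishonest voting expensive: a validator who votes against the eventual consensus puts their stake at risk.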

How PoQ Works

1. Define Quality: Create a Task Definition Spec (TDS) that defines what 'good' looks like.
2. Submit Data: Submit data points to be evaluated, labeled, or improved.
3. Task Registered: Task registered onchain, funded with $SAPIEN, opened for contributors.
4. Contribute & Validate: Contributors stake and submit work. Validators review via commit-reveal.
5. Consensus: Stake-weighted consensus determines acceptance. Validators earn or slash.
6. Attest & Execute: Attestation written onchain. Your system consumes the quality signal.
7. Trusted Output: Quality you can trust. Provenance you can prove.
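The validation step above uses commit-reveal voting. The idea can be sketched as a toy scheme (not the actual PoQ implementation): each validator first publishes a hash of their vote plus a secret salt, then later reveals both, so no validator can copy another's judgment before revealing their own.

```python
# Toy commit-reveal scheme for illustration; not the actual PoQ mechanism.
import hashlib
import secrets

def commit(vote: str) -> tuple[str, str]:
    """Return (digest, salt). Publish the digest now; keep the salt private."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{vote}:{salt}".encode()).hexdigest()
    return digest, salt

def reveal_ok(digest: str, vote: str, salt: str) -> bool:
    """Check that a revealed (vote, salt) pair matches the earlier commitment."""
    return hashlib.sha256(f"{vote}:{salt}".encode()).hexdigest() == digest

digest, salt = commit("accept")
# Later, the validator reveals ("accept", salt); anyone can verify the match.
```

Because the digest binds the vote before anyone sees it, a validator cannot wait to see the committee's leaning and then claim to have agreed all along.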

Who PoQ is For

PoQ is designed for builders who need enforced quality, not just output evaluation.

Agent builders (LangChain, LangSmith, Onchain Agents, custom stacks)

Teams building autonomous or semi-autonomous agents where actions have real cost.

  • Need a confidence signal before executing high-stakes actions
  • Want quality enforcement without hard-coding rules
  • Running agents in production where failures are expensive or irreversible

RLHF & HITL Pipeline Operators

Teams managing reinforcement learning or human-in-the-loop systems at scale.

  • Collect subjective judgments that must be reliable, not noisy
  • Require consensus instead of single-reviewer approval
  • Want auditability and provenance for training and evaluation loops

Tool Builders & Platform Integrators

Teams building tools, frameworks, or platforms used by AI developers.

  • Want to embed quality as a first-class primitive in their product
  • Can extend PoQ through adapters instead of bespoke integrations
  • Benefit from shared infrastructure rather than rebuilding QC systems

ML & Deep Learning Model Builders

Training models that require high-quality labeled datasets with provable annotation quality.

  • Need cryptographic proof of annotation correctness
  • Manual spot-checking doesn't scale across datasets
  • Want verifiable quality guarantees for model training
  • Existing labeling tools like CVAT lack economic incentives for quality

What You Get

A trust layer for subjective tasks
Local consensus instead of single-reviewer noise
Data stays in your bucket
Onchain provenance for training and RL loops
A simple way to verify model output or agent behaviour
A protocol you can build on, not a platform you are locked into

Ready to Build with PoQ?

Whether you're building adapters, integrating PoQ into your AI pipeline, or exploring new use cases, let's discuss how we can help.

Book a Call