The Integrity Framework · Spec · v1.0

The one-page version.

This page is the dense, citable artifact. The full version, with cases, fork instructions, and the moat model, lives at /framework; this page is the spec a procurement reviewer or developer can read in five minutes and cite in a footnote.

Version 1.0 · Last updated 2026-04-25 · Originated by Startvest LLC · License CC BY 4.0

1. Preamble

This framework defines a verifiable integrity methodology for software products that incorporate AI components but are not themselves regulatory compliance products. It is designed for builder-led products operating below the high-risk threshold of regulatory frameworks such as the EU AI Act. It is free, open, and licensed under CC BY 4.0. Adoption is the point.

2. The five failure modes

The framework defends against five recurring failure patterns that have collapsed compliance- and trust-adjacent product categories.

  1. Trust-arbitrage failure. Selling certification artifacts as the product instead of underlying outcomes.
  2. Theater versus substance. Outputs that look like compliance but don't verify the underlying state.
  3. Conflict of interest. Verifier paid by the verified entity, no structural independence.
  4. Black-box AI failure. AI producing compliance outputs without humans understanding what was done.
  5. Velocity over rigor. Business pressure to ship faster than the work allows. Speed claims become trust claims become fraud.

3. Three operational layers

Layer 1: Pre-build vetoes (six)

  1. Artifact vs outcome. Outcome passes. Artifact-as-product fails.
  2. Independence. Customer pays for tooling: pass. Customer pays us to prepare AND certify: fail.
  3. Verifiability. Mechanical verification: pass. Customer attestation as proof: fail.
  4. AI accountability. AI outputs pass through documented human review: pass. AI directly to customer-facing claim: fail.
  5. Pricing-rigor alignment. Pricing tied to actual work: pass. “Unlimited audits for $X/year”: fail.
  6. TechCrunch test. Worst-case headline survivable with concrete defense: pass. Hand-waving: fail.

Layer 2: Architectural constraints (seven)

  1. Evidence chain integrity.
  2. AI output review gates.
  3. Customer self-attestation isolation.
  4. Reproducibility.
  5. Evidence retention independence.
  6. Independent verification hooks.
  7. Failure transparency.

These constraints are CI-enforced where the codebase shape allows. v1.1 candidate: an explicit prohibition on pre-populating attestation outputs (surfaced by the Delve case).
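One such structural enforcement (constraint 2, AI output review gates) can be sketched as a CI check. This is a minimal illustration, not part of the spec: the directory layout, file names, and review-record fields (`approved`, `reviewer`) are all assumptions.

```python
# Hypothetical CI gate: block publication of any AI-generated output
# that lacks an approved human-review record. Paths and record fields
# are illustrative assumptions, not defined by the framework.
import json
import pathlib
import sys

OUTPUT_DIR = pathlib.Path("ai_outputs")      # assumed location of AI outputs
REVIEW_DIR = pathlib.Path("review_records")  # assumed location of review records

def unreviewed_outputs():
    """Return AI outputs that have no approved human-review record."""
    missing = []
    for output in sorted(OUTPUT_DIR.glob("*.json")):
        record = REVIEW_DIR / f"{output.stem}.review.json"
        if not record.exists():
            missing.append(output.name)
            continue
        review = json.loads(record.read_text())
        # A record counts only if a named human signed off.
        if not (review.get("approved") and review.get("reviewer")):
            missing.append(output.name)
    return missing

if __name__ == "__main__":
    missing = unreviewed_outputs()
    if missing:
        print("FAIL: unreviewed AI outputs:", ", ".join(missing))
        sys.exit(1)  # fail the pipeline: no review, no customer-facing claim
    print("PASS: every AI output has a documented human review")
```

The point of the sketch is that the gate is structural (the pipeline fails) rather than policy-only, which is also what scorecard row 5 below asks for.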

Layer 3: Operational guardrails (seven)

  1. Refund-on-failure clause in standard MSA.
  2. Public methodology page with version + changelog.
  3. Annual independent audit with public findings.
  4. Customer-side compliance owner identified before sale.
  5. Whistleblower channel external to the operator.
  6. Accountability community with free-tier scrutiny.
  7. Public kill criteria with specific thresholds.

v1.1 candidate: sub-processor auditor identity verification at onboarding and annual re-verification (Delve-driven).

4. The vendor scorecard

Six yes/no rows, one point per yes. A score below 5 is itself information.

  1. Public methodology page exists?
  2. Refund-on-failure clause in standard MSA?
  3. Independent third-party audit, annually, with public findings?
  4. Per-product INTEGRITY.md (or equivalent) in public repo?
  5. AI output review gate structurally enforced, not policy-only?
  6. Public kill criteria with specific thresholds?

See /framework#scorecard for the per-row pass / fail criteria.
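The scoring rule is simple enough to state as code. A minimal sketch; the row labels come from this section, and the yes/no answers would be determined by the per-row criteria at /framework#scorecard:

```python
# Vendor scorecard: six yes/no rows, one point per yes.
# A score below 5 is itself information, not a pass.
SCORECARD_ROWS = [
    "Public methodology page exists",
    "Refund-on-failure clause in standard MSA",
    "Independent third-party audit, annually, with public findings",
    "Per-product INTEGRITY.md (or equivalent) in public repo",
    "AI output review gate structurally enforced, not policy-only",
    "Public kill criteria with specific thresholds",
]

def score(answers: dict[str, bool]) -> tuple[int, bool]:
    """Return (points, meets_threshold) for a vendor's yes/no answers."""
    points = sum(1 for row in SCORECARD_ROWS if answers.get(row, False))
    return points, points >= 5

# Example: a vendor passing every row except the annual audit
# still scores 5 and meets the threshold.
answers = {row: True for row in SCORECARD_ROWS}
answers["Independent third-party audit, annually, with public findings"] = False
# score(answers) -> (5, True)
```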

5. Crosswalks

The framework crosswalks to NIST AI RMF, ISO/IEC 42001, EU AI Act, CSA AICM, SOC 2 TSC, and WCAG. Full mapping at /framework/v1/crosswalks. The framework sits in the gap those regimes leave open: integrity for AI-powered SaaS that isn't itself a compliance product.

6. License and citation

CC BY 4.0. Copy, modify, redistribute (including commercial). Provide attribution to the frozen v1.0 URL.

Plain-text citation:

Startvest LLC. The Integrity Framework v1.0. Published 2026-04-25. https://claritylift.ai/framework/v1

7. Versioning and contribution

SemVer plus Keep a Changelog. Version history at /framework#versions. Contribute via email to integrity@startvest.ai. Every framework change ships with a paired changelog entry. Silent drift is the failure mode the framework defends against.
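The pairing rule lends itself to a structural check. A sketch under stated assumptions: a git repository with framework content under `framework/` and a top-level `CHANGELOG.md` in Keep a Changelog style; both paths are illustrative, not part of the spec.

```python
# Hypothetical CI check: a change to framework content must ship with
# a paired CHANGELOG.md entry, so that drift is never silent.
# FRAMEWORK_PATHS and CHANGELOG are assumed locations, not spec.
import subprocess

FRAMEWORK_PATHS = ("framework/",)  # assumed location of framework content
CHANGELOG = "CHANGELOG.md"         # assumed Keep a Changelog file

def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def changelog_paired(files: list[str]) -> bool:
    """True unless framework content changed without a changelog entry."""
    touches_framework = any(f.startswith(FRAMEWORK_PATHS) for f in files)
    return (not touches_framework) or (CHANGELOG in files)

# In CI, the job would fail when the pairing is violated, e.g.:
#   sys.exit(0 if changelog_paired(changed_files()) else 1)
```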

8. What this isn't


Originated by Startvest LLC. Open for fork. Latest revision and full context at /framework. Frozen v1.0 at /framework/v1.