
The Integrity Framework · v1.0 · originated by Startvest LLC

A framework for building compliance products under structural commitments.

Defends against the five recurring failure modes that have destroyed compliance categories before. Three operational layers, eight moat layers, six buyer questions. Forkable. We hope you fork it.

The framework is generic on purpose. Adoption is the point; if it stays branded as one company's thing, it stays one company's thing. Anyone can run a product under this framework. Startvest happens to be the first.

Version 1.0 · Last updated 2026-04-25 · Permanent v1 URL: /framework/v1

What this is

A working document. It governs how every Startvest product is designed, built, and operated. It's also published openly so other operators can fork it for their own compliance and trust-adjacent products. The framework's value is in adoption; locking it down would defeat the purpose.

The framework has three operational layers (vetoes, architectural constraints, operational guardrails) plus an eight-layer moat model used as decision criteria for product portfolio fit. Each is documented below with pointers to the runbooks, inspection templates, and trust reports that operationalize them.

If you fork this for your own product, the only request is that you keep the version-and-changelog discipline. A framework that drifts silently is the failure mode the framework defends against.

What it defends against

Five recurring failure modes have destroyed compliance and trust-adjacent categories before. The framework names them so they can be defended against by name.

  1. Trust-arbitrage failure. Selling certification artifacts as the product instead of underlying outcomes. Volume-based business models destroy rigor over time.
  2. Theater versus substance failure. Outputs that look like compliance but don't verify the underlying state. Checklists checked without verification, evidence collected without inspection.
  3. Conflict-of-interest failure. Verifier paid by the verified entity, with no structural independence. The Andersen / Enron pattern.
  4. Black-box AI failure. AI producing compliance outputs without humans understanding what was done, why, or whether it's correct. Unique to current-generation compliance work.
  5. Velocity-over-rigor failure. Business pressure to ship audits or certifications faster than they can be done well. Speed claims become trust claims become fraud.

Recent industry collapses (Theranos, FTX, Delve, others) all map to one or more of these. The framework is reverse-engineered from those failures.

Layer 1: Pre-build vetoes

Six questions evaluated before a product gets built or before a major scope expansion. A wrong answer kills the product; it doesn't just delay it.

  1. Artifact versus outcome. Is the value proposition selling an artifact (report, badge, score) or an outcome (actual compliance, security, audit-readiness)? Outcome passes. Artifact fails.
  2. Independence. Who pays us, and does that conflict with what we verify? Customer pays for tooling that helps them prepare for verification by genuinely independent third parties: pass. Customer pays us to both prepare AND certify: fail. Hard rule, not negotiable.
  3. Verifiability. Can we mechanically verify what we claim, or are we relying on customer attestation alone? Mechanical: pass. Attestation as proof: fail.
  4. AI accountability. When AI gets it wrong, what's the human review layer and escalation path? AI outputs pass through documented review gates before becoming attestations: pass. AI directly to customer-facing claim: fail.
  5. Pricing-rigor alignment. Does our pricing model create financial pressure to skip work? Pricing tied to actual work performed: pass. “Unlimited audits for $X/year”: fail.
  6. The TechCrunch test. Imagine the worst-case headline about this product in 18 months. Can we defend every claim, methodology, and output? Defense is concrete and survives scrutiny: pass. Defense requires hand-waving: fail.

A product failing any veto either gets reframed before build OR gets passed on. The framework is more important than any single revenue line.
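The fail-closed shape of Layer 1 can be sketched in code. This is an illustrative encoding, not Startvest's tooling; the veto keys and function name are ours, and an unanswered question is treated as a failure by design.

```python
# The six Layer 1 vetoes as a fail-closed gate: any single "no" kills
# the product. Key names are illustrative, not from the framework text.
VETOES = (
    "artifact_vs_outcome",      # selling an outcome, not an artifact?
    "independence",             # no prepare-AND-certify conflict?
    "verifiability",            # claims mechanically verifiable, not attested?
    "ai_accountability",        # documented human review gates?
    "pricing_rigor_alignment",  # pricing tied to work actually performed?
    "techcrunch_test",          # worst-case headline defensible?
)

def evaluate_product(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (passes, failed_vetoes). Unanswered questions fail closed."""
    failed = [v for v in VETOES if not answers.get(v, False)]
    return (not failed, failed)

ok, failed = evaluate_product({
    "artifact_vs_outcome": True,
    "independence": True,
    "verifiability": True,
    "ai_accountability": True,
    "pricing_rigor_alignment": False,  # e.g. "unlimited audits for $X/year"
    "techcrunch_test": True,
})
print(ok, failed)  # False ['pricing_rigor_alignment']
```

Note the default in `answers.get(v, False)`: a veto nobody answered is a veto failed, which mirrors the framework's posture of never silently defaulting to "pass."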

Layer 2: Architectural constraints

Seven constraints baked into every product, CI-enforced where possible.

  1. Evidence chain integrity. Every compliance claim traceable to specific evidence the product collected and timestamped.
  2. AI output review gates. Any AI-generated compliance output passes through human review before becoming an attestation, report, or customer-facing claim.
  3. Customer self-attestation isolation. Customer-attested data visually and architecturally distinct from product-verified data.
  4. Reproducibility. Every audit conclusion reproducible from underlying evidence by an independent reviewer. Internal review gates test reproducibility quarterly.
  5. Evidence retention independence. Evidence supporting compliance claims retained per statutory requirements regardless of normal data lifecycle. Customer offboarding does NOT delete audit evidence.
  6. Independent verification hooks. Every compliance product has a mode where an external auditor can verify product outputs without the product mediating.
  7. Failure transparency. When the product can't verify something, it MUST say so. Never silently default to “compliant” when verification fails.

CI rules enforce these where the codebase shape allows. Evidence-chain integrity becomes a non-null foreign key. AI output review gates become required schema fields. Failure transparency becomes a forbidden-pattern check that blocks catch { return verified: true } regressions.
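A minimal sketch of the forbidden-pattern check described above, assuming a JS/TS codebase. The regex, file handling, and exit convention are illustrative; a production rule would use an AST-based linter rather than a regex.

```python
import re
from pathlib import Path

# Flags a catch block that silently reports "verified: true" after a failure.
# Deliberately narrow; illustrative only.
FORBIDDEN = re.compile(r"catch\s*[({][^}]*verified\s*:\s*true", re.DOTALL)

def scan(paths: list[str]) -> list[str]:
    """Return the files containing the forbidden pattern."""
    hits = []
    for p in paths:
        text = Path(p).read_text(encoding="utf-8", errors="ignore")
        if FORBIDDEN.search(text):
            hits.append(p)
    return hits

def main(paths: list[str]) -> int:
    offenders = scan(paths)
    for f in offenders:
        print(f"forbidden pattern (silent 'verified: true' in catch): {f}")
    return 1 if offenders else 0  # non-zero blocks the build in CI
```

In CI this would run over the changed files, e.g. `sys.exit(main(changed_files))`, so a `catch { return verified: true }` regression fails the build instead of shipping.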

Layer 3: Operational guardrails

Seven business practices that keep the business model from souring over time.

  1. Refund-on-failure. Every customer contract includes a refund clause if our certification or verification turns out wrong because of our error or oversight.
  2. Public methodology documentation. Every product publishes the methodology by which it produces compliance outputs. Not the source code, the methodology.
  3. Annual independent audit of our own product. Once per year, every Startvest compliance product reviewed by a real third-party CPA firm or security firm. They sample our outputs. We publish findings, whatever they find.
  4. Customer-side compliance owner. Before selling to a company, identify who at that company is responsible for the compliance outcome. We don't sell to companies where compliance is “operations' problem.”
  5. Internal whistleblower channel. Anonymous reporting channel monitored by independent counsel. Quarterly board-level review.
  6. Community accountability pattern. Free tier or pack offered to a high-trust community that watches our work. Community will notice fakery.
  7. Public kill criteria. Every compliance product publishes the criteria under which we'd shut it down. Inverse of growth-at-all-costs.

The eight-layer moat model

Operating decision criteria used when evaluating new products to commit engineering capacity to. The moat model and the integrity framework are complementary: the framework keeps us from shipping things that shouldn't ship; the moat model picks among the things that should.

A single-layer moat is a feature, not a moat. Yahoo had a directory; Google had better search; Yahoo died. BlackBerry had the keyboard; the iPhone eventually matched it; BlackBerry died. A product whose only differentiator is one matchable feature loses the moment a competitor matches it. Single-feature moats are death traps in slow motion.

Real moats are stacked. A competitor has to defeat multiple layers simultaneously to break in.

Layer 1

Founder credentials

What a competitor would need years to acquire. SDVOSB certification, USMC veteran status, prior security clearance, deep domain context. Veteran-owned business community access alone is a real moat for federal-contractor products.

Evaluation: Does this product benefit from credentials we already hold? If a product requires credentials we don't have and don't plan to acquire, it's a category trap.

Layer 2

Technical architecture

What the code does that's hard to replicate. Retention-zero architecture with CI-enforced privacy invariants. DCAA-compliant audit chain. Algorithm with multiple pre-scoring rules. The CI enforcement is the load-bearing piece: anyone can claim privacy; only ClarityLift can prove the claim survives a build because the build fails when the claim fails.

Evaluation: Can the technical architecture be enforced at build time? If the only enforcement is policy or documentation, it's a Delve-class risk.

Layer 3

Distribution

Channels of customer discovery that are hard to cut off. App-store integrations, MCP listings, vertical SEO, trade association partnerships, direct sales to known customer segments. Five compounding distribution channels are harder to disrupt than one.

Evaluation: Does this product reuse existing distribution, or does it require building new distribution from zero?

Layer 4

Integrity framework

This document and the artifacts it points at. Layer 1 vetoes, Layer 2 architectural constraints, Layer 3 operational guardrails. Public Trust Principles. Annual third-party audits. Whistleblower channel run by independent counsel. Refund-on-failure clauses. Specifically defends against the Delve failure mode.

Evaluation: Does this product strengthen the integrity framework's credibility, or does it require constraints the framework can't enforce?

Layer 5

Community accountability

Free tiers for high-trust communities. Veteran-owned business community for federal-contractor products. Disability-rights advocates for ADA compliance products. Public-sector orgs for trust-adjacent products. Public methodology documentation. Communities watching the work catch fakery faster than auditors.

Evaluation: Is there a high-trust community this product serves that would care enough to watch?

Layer 6

Compounding intelligence

Pattern recognition that improves with time and can't be skipped. The scanner's accumulated scored-and-validated candidates. Each product built informs scoring. A competitor entering this space starts at month 1 of pattern recognition while we're at month 24. Time spent here doesn't depreciate.

Evaluation: Does this product feed back into the scanner or other compounding-intelligence systems? Or does it stand alone?

Layer 7

Honest brand

Commitments competitors can copy only at cost. "Won't certify our own customers." "Will refund when wrong." "SDVOSB pack free forever." A competitor matching these claims must deliver them, which means giving up shortcuts most competitors rely on. The brand is the contract.

Evaluation: Does this product's positioning require integrity commitments that lock in the brand promise, or does it sit comfortably in the standard SaaS mode?

Layer 8

Portfolio synergy

Multiple products sharing infrastructure, distribution, brand, community. Killing one product doesn't hurt the others; the portfolio absorbs the loss. Cross-pollination across products that competitors with one product can't match.

Evaluation: Does this product reinforce 6+ of the other 7 layers, or does it stand alone?

Using the model as decision criteria

When evaluating a new product or major scope expansion, score it against the eight layers:

  • Score 6-8: strong portfolio fit. Builds defense alongside the existing portfolio.
  • Score 3-5: mixed. Needs explicit justification for why the unfit layers don't matter.
  • Score 0-2: category trap regardless of immediate revenue. Pass.

This filter sits alongside (not in place of) standard product-scoring frameworks and the Layer 1 integrity vetoes. A product can score high on a feature-fit metric and still fail the moat filter; a product can pass all six vetoes and still not fit the portfolio.
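The banding above reduces to a small function. This is a hedged sketch of the filter as decision criteria; the layer keys and function name are ours, and an unscored layer counts as unfit.

```python
# Score a candidate product against the eight moat layers and map the
# total to the bands in the framework. Key names are illustrative.
LAYERS = (
    "founder_credentials", "technical_architecture", "distribution",
    "integrity_framework", "community_accountability",
    "compounding_intelligence", "honest_brand", "portfolio_synergy",
)

def moat_verdict(fits: dict[str, bool]) -> tuple[int, str]:
    """Return (score, band). Unscored layers count as unfit."""
    score = sum(1 for layer in LAYERS if fits.get(layer, False))
    if score >= 6:
        return score, "strong portfolio fit"
    if score >= 3:
        return score, "mixed: justify the unfit layers"
    return score, "category trap: pass"

print(moat_verdict({layer: True for layer in LAYERS[:4]}))
```

This runs after, not instead of, the Layer 1 vetoes: a product that fails a veto never reaches the moat score.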

Vendor scorecard

Six yes/no questions. Score any compliance or trust-adjacent vendor (including products operating under this framework) against them. One point per yes. A score below 5 is itself information.

Q01. Public methodology page exists?

Yes: Versioned, changelogged, written so a procurement reviewer can verify each step. Linkable URL.

No: No page. Or the page exists but reads like marketing copy. Or no version and no changelog.

Q02. Refund-on-failure clause in the standard MSA?

Yes: Default contract terms. Vendor can paste the clause on request.

No: Refund only on negotiated enterprise contracts. Or no refund clause at all.

Q03. Independent third-party audit, annually, with public findings?

Yes: Real CPA or security firm. Most recent audit shareable on request. Findings public.

No: Only the audits the vendor SELLS to customers exist. The vendor itself is unaudited.

Q04. Per-product INTEGRITY.md (or equivalent) in the public repo?

Yes: Each principle states implemented / partial / not-yet, with file paths and CI rule names.

No: No file. Or the file exists but every principle reads “committed to” with no implementation pointer.

Q05. AI output review gate is structurally enforced, not policy-only?

Yes: Vendor walks through the review gate, the database column that enforces it, and the CI rule that blocks bypassing it.

No: Review gate is a policy or training. Or AI outputs reach customer-facing claims directly.

Q06. Public kill criteria with specific thresholds?

Yes: Numbers, percentages, days, dates. Written down. Linkable.

No: No kill criteria. Or vague "we'd shut it down if it stopped working" language.

Each row maps to a Layer 1 veto, a Layer 2 constraint, or a Layer 3 guardrail. A vendor that ducks any row is operating outside the framework whether they claim it or not.
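The six rows above reduce to a simple tally. A hedged sketch (the question keys are ours; the example answers mirror the 4/6 self-grade pattern published below, with the MSA refund clause and the third-party audit as the two NOs):

```python
# Vendor scorecard tally: one point per yes, unanswered rows count as no.
QUESTIONS = ("Q01", "Q02", "Q03", "Q04", "Q05", "Q06")

def vendor_score(answers: dict[str, bool]) -> tuple[int, list[str]]:
    """Return (score, failed_rows)."""
    gaps = [q for q in QUESTIONS if not answers.get(q, False)]
    return len(QUESTIONS) - len(gaps), gaps

score, gaps = vendor_score(
    {"Q01": True, "Q02": False, "Q03": False, "Q04": True, "Q05": True, "Q06": True}
)
print(score, gaps)  # 4 ['Q02', 'Q03']
```

The failed-rows list, not just the number, is the useful output: each gap names a specific veto, constraint, or guardrail the vendor is operating without.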

Run the scorecard on this product before trusting the framework. We score 6/6 only when the operationalizing artifacts (methodology, audit, INTEGRITY.md, kill criteria) are all live and linkable. Until they are, the score is partial. We say so on Trust Principles and on each per-product INTEGRITY.md.

Live self-grades (baseline)

The framework is worthless if the operators who publish it don't score themselves against it. These are the current self-grades for the three Startvest products operating under v1.0, captured the day the framework published.

Baselines published at 2/6 and 3/6 the day v1.0 shipped; all three products now stand at 4/6. We are closing the remaining gaps in public.

ClarityLift

4 YES / 0 PARTIAL / 2 NO

MSA refund clause (drafted, finalization in flight). Annual third-party audit deferred pending external funding (engagement cost is currently unfunded; not relabeled as scheduled while it isn't).

FieldLedger

4 YES / 0 PARTIAL / 2 NO

MSA refund clause (drafted, finalization in flight). Annual third-party audit deferred pending external funding.

adacompliancedocs

4 YES / 0 PARTIAL / 2 NO

MSA refund clause (drafted, finalization in flight). Annual third-party audit deferred pending external funding.

Each per-product INTEGRITY.md names the gaps with file paths and target close dates. The methodology and kill-criteria rows have already moved to YES. The MSA refund clause is drafted and pending finalization.

Row 3 (annual third-party audit) is honestly classified. The audit is required for the row to move YES, and the engagement cost is gated on external funding that Startvest does not currently have. We are not relabeling this as “in flight” or “scheduled” while the funding does not exist; the row stays NO with this disclosure attached. When funding is secured, the row moves to PARTIAL with the audit firm name and engagement date public. It moves to YES only when an audit cycle completes and the findings publish.

Hold us to this. If a score drops without a paired changelog entry on the framework page or the per-product INTEGRITY.md, that is the failure pattern itself.

How to fork this framework for your own product

You're welcome to. Adoption is the point.

  1. Copy this file to your product's repository or your company's documentation site. Add your own version-and-changelog discipline. Don't drop the changelog; that's how the framework defends against silent drift.
  2. Translate Layer 1 / Layer 2 / Layer 3 to your context. Some items here are Startvest-specific (SDVOSB community accountability, federal-contractor distribution) and won't apply to you. Others (refund-on-failure, public methodology, AI review gates) translate cleanly to most compliance and trust-adjacent products.
  3. Publish your version openly. A framework hidden in a private repo isn't a framework; it's a policy document. The structural commitment to publishing is what gives the framework teeth.
  4. Tell us if you fork it. No obligation. Email integrity@startvest.ai if you do; we maintain a list of products operating under variants of the framework, and that list is itself accountability for everyone on it.

No license fee. No formal license at all. The framework's value is in adoption and adaptation. Lock-in would defeat the purpose.

How to cite this framework

Citation-stable URLs. The latest revision lives at claritylift.ai/framework. Frozen versions live at claritylift.ai/framework/v1, /v2, etc. Cite the frozen URL when the wording matters; cite the latest URL when the principle matters.

Plain text

Startvest LLC. The Integrity Framework v1.0. Published 2026-04-25. https://claritylift.ai/framework/v1

Markdown

[The Integrity Framework v1.0](https://claritylift.ai/framework/v1) — Startvest LLC, 2026-04-25.

In your product's INTEGRITY.md

This product operates under [The Integrity Framework v1.0](https://claritylift.ai/framework/v1). Per-principle status: see below.

If you operate a product under the framework, link the frozen version you're operating against. When you upgrade, bump the link in your INTEGRITY.md and note the upgrade in your changelog. Silent drift is the failure mode the framework defends against.

Case studies

Real, named compliance failures walked through the failure modes and the vendor scorecard. Sourced citations only; we're not investigating, we're mapping public reporting to the framework so the failure modes stay concrete instead of abstract.

Have a case we should write up? Email integrity@startvest.ai.

Version history

  • v1.0 (2026-04-25). Initial publication. Originated by Startvest LLC. Three operational layers (vetoes, architectural constraints, guardrails), eight-layer moat model, six buyer questions, fork invitation. Hosted at claritylift.ai/framework as interim while the dedicated framework site is built. Permanent v1 URL at /framework/v1.

The framework operationalized

The framework is a working document. These artifacts make it real:

Cross-product strategy: Startvest/INTEGRITY-ROLLOUT.md. Active workplan: Startvest/INTEGRITY-WORKPLAN.md.

Contact: integrity@startvest.ai. Monitored quarterly by independent counsel.