Phosra Spec / Capability

PCSS v1.0 — Draft

Phosra Verdict

Algorithmic transparency, audit, and dark-pattern interventions.

What Verdict does

One audit-ready answer to “how does the algorithm decide what to show kids.”

Every law that asks “how does the algorithm decide what to show kids” — KOSA’s algorithmic transparency duty, KOSMA, CA SB 976 (addictive design), EU DSA Article 28, NY S9051 (AI engagement dark-pattern ban), the EU AI Act’s risk classifications — needs an answer that’s both human-readable and machine-auditable.

Verdict is the canonical algorithmic-transparency + intervention layer. It disables infinite scroll and autoplay for minor accounts, blocks engagement dark patterns, weights re-rankers toward civil-society “prosocial” / “role model” / “diverse representation” dimensions, generates DPIA evidence packs on request, and produces the periodic algorithmic-audit report regulators require.
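To make the intervention layer concrete, here is a minimal sketch of how per-account intervention flags might be resolved. All names (`Account`, `resolve_interventions`, the flag keys) are hypothetical illustrations, not Phosra's actual API.

```python
# Illustrative sketch only: these names are hypothetical, not Phosra's API.
from dataclasses import dataclass


@dataclass
class Account:
    age: int
    jurisdiction: str


def resolve_interventions(account: Account) -> dict:
    """Return the intervention flags Verdict-style logic would apply."""
    is_minor = account.age < 18
    return {
        "autoplay_disabled": is_minor,           # addictive-design control (CA SB 976-style)
        "infinite_scroll_blocked": is_minor,
        "engagement_dark_patterns_blocked": is_minor,
        "personalized_feed_disabled": is_minor,  # fall back to chronological delivery
    }


flags = resolve_interventions(Account(age=15, jurisdiction="US-CA"))
```

In a real deployment the minor determination would come from an age-assurance signal rather than a raw age field; the point here is only that each flag is derived, recorded, and auditable per account.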

A platform’s response to “show me your algorithmic-decision audit trail for minors in jurisdiction X” goes from a months-long discovery exercise to a generated PDF. Parents see what’s being recommended and why. Phosra’s Notary signs every decision so the audit trail is court-defensible.
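The signed audit trail can be pictured as follows. This is a sketch only: the HMAC scheme, key handling, and field names are assumptions for illustration, not Notary's actual signing protocol.

```python
# Illustrative only: the signing scheme and field names are assumptions,
# not the Phosra Notary specification.
import hashlib
import hmac
import json


def sign_audit_record(record: dict, key: bytes) -> dict:
    """Attach a keyed signature over a canonical JSON form of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    signed = dict(record)
    signed["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return signed


signed = sign_audit_record(
    {"rule": "Autoplay Disable", "account": "a1", "decision": "enforced"},
    key=b"demo-key",
)
```

A verifier holding the key can recompute the digest over the canonical payload and compare it to the stored signature, which is what makes the trail court-defensible rather than merely logged.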

How partners plug in

Verdict is a socket. Recommender events flow in. Audit-ready evidence flows out.

These are the upstream auditors, schemas, and rating dimensions Verdict publishes against — either shipping today, in conversation with a partner, or pending an upstream pilot.

  • Mozilla Foundation + academic algorithmic-audit researchers — independent verification (pending pilot)
  • EU DSA Art. 28 compliance toolkits — pre-built reporting schema (design partner candidate)
  • Civil-society AI accountability rubric — “Be Transparent & Accountable” dimension, in conversation (mappings shipped)

Standards & laws

What Verdict does for each statute.

  • KOSA + KOSMA — fulfills the algorithmic-transparency duty for covered platforms.
  • CA SB 976 (addictive design) — enforces autoplay-off + infinite-scroll-block for minor accounts.
  • EU DSA Article 28 (minor protection) — produces the periodic transparency report on minor exposure.
  • NY S9051 (AI engagement dark patterns) — blocks engagement-driven dark patterns in AI products.
  • EU AI Act (high-risk classification) — surfaces the “you are talking to an AI” disclosure on every Nth turn.
  • FTC algorithmic-transparency consent decrees — generates the agency-required audit pack.
  • CA AB 2273 (AADC, Phase 2) — enforces the children’s-design-code on algorithmic surfaces.
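The statute-to-rule relationship above can be sketched as a lookup table. The mapping below is illustrative only: the rule names come from Verdict's published rule list, but the statute keys and the pairings are assumptions, not an official compliance schema.

```python
# Hypothetical statute-to-rule mapping for illustration; not an official schema.
STATUTE_RULES = {
    "KOSA/KOSMA": ["Algorithmic Audit", "Algorithm Feed Control"],
    "CA SB 976": ["Autoplay Disable", "Infinite Scroll Block", "Addictive Design Control"],
    "EU DSA Art. 28": ["Algorithmic Audit", "DPIA Request Generator"],
    "NY S9051": ["AI No Engagement Dark Patterns"],
    "EU AI Act": ["AI Is It Real Disclosure"],
}


def rules_for(statute: str) -> list[str]:
    """Return the Verdict rules that would be activated for a statute."""
    return STATUTE_RULES.get(statute, [])
```

A table like this is what lets a single rule registry answer "which controls satisfy jurisdiction X" without per-statute code paths.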

Conformance

Adopter Tier 1 certification.

To ship Verdict conformance for an Adopter Tier 1 certification, your implementation must pass the Verdict suite. The test count is [draft], coming Q3 2026. The suite covers autoplay/infinite-scroll suppression for minors, dark-pattern detection accuracy, DPIA evidence-pack generation, and signed audit-trail emission to Notary.

We are co-authoring the suite with our design partners. If you want a seat at the table while the bar is being set, reach out.

Rule list

The 11 rules Verdict ships

Every rule below is implemented by this capability. Pulled directly from the rule registry.

  • Prosocial Weight
  • Role Model Weight
  • Representation Weight
  • DPIA Request Generator
  • Algorithm Feed Control — disables personalized algorithmic feeds and switches to chronological or non-profiled content delivery.
  • Addictive Design Control — disables autoplay, infinite scroll, notification streaks, and other compulsive-use design patterns.
  • Infinite Scroll Block
  • Autoplay Disable
  • AI No Engagement Dark Patterns
  • AI Is It Real Disclosure
  • Algorithmic Audit
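The three weighting rules at the top of the list (Prosocial, Role Model, Representation) can be pictured as a boost applied during re-ranking. The weights, signal names, and function shape below are illustrative assumptions, not Phosra's shipped defaults.

```python
# Sketch of prosocial re-ranker weighting; values are illustrative assumptions.
def rerank_score(base_score: float, signals: dict, weights: dict) -> float:
    """Boost a candidate item's score by civil-society rating dimensions."""
    boost = sum(
        weights.get(dim, 0.0) * signals.get(dim, 0.0)
        for dim in ("prosocial", "role_model", "representation")
    )
    return base_score * (1.0 + boost)


weights = {"prosocial": 0.3, "role_model": 0.2, "representation": 0.1}
# 1.0 * (1 + 0.3*1.0 + 0.2*0.5) = 1.4
score = rerank_score(1.0, {"prosocial": 1.0, "role_model": 0.5}, weights)
```

A multiplicative boost keeps the base relevance ordering intact for items with no civil-society signals, which is one plausible way to weight a re-ranker without replacing it.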

Implementing Phosra Verdict? Talk to us.
