Scorecard API
Free, public REST API for accessing Phosra Safety Scorecard data. All endpoints return JSON (unless noted) with 1-hour ISR caching. No authentication required.
Quick Start
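As a quick sketch of consuming the API (the base host is taken from the badge embed examples on this page; the payload below is a trimmed sample mirroring the documented response shape, not a live response):

```python
import json

# Base host taken from the badge embed examples; a live call would be
# json.load(urllib.request.urlopen(url)) against this URL.
url = "https://www.phosra.com/api/research/scores"

# Trimmed sample mirroring the documented /api/research/scores shape.
sample = """{
  "meta": {"totalPlatforms": 11, "totalTests": 437, "testCategories": 21},
  "platforms": [
    {"rank": 1, "platformId": "gemini", "platformName": "Gemini",
     "category": "ai_chatbot", "overallGrade": "A", "numericalScore": 94.3}
  ]
}"""

data = json.loads(sample)
for p in data["platforms"]:
    print(f"#{p['rank']} {p['platformName']}: {p['overallGrade']}")  # #1 Gemini: A
```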
/api/research/scores

All platform scores with grades, regulatory exposure, and compliance gaps.
Response

```json
{
  "meta": { "totalPlatforms": 11, "totalTests": 437, "testCategories": 21 },
  "platforms": [
    {
      "rank": 1, "platformId": "gemini", "platformName": "Gemini",
      "category": "ai_chatbot", "overallGrade": "A", "numericalScore": 94.3,
      "categoryScores": [...], "regulatory": {...}, "complianceGap": {...}
    }, ...
  ],
  "regulatoryLandscape": {...}
}
```

/api/research/scores/platforms/:platformId

Detailed report card for a single platform.
Parameters
- platformId (string): chatgpt, claude, gemini, grok, character_ai, copilot, perplexity, replika, netflix, prime_video, peacock

Response
```json
{
  "platformId": "chatgpt", "platformName": "ChatGPT",
  "category": "ai_chatbot", "overallGrade": "B+", "numericalScore": 87.3,
  "rank": 3, "totalPlatforms": 11,
  "categoryScores": [
    { "categoryId": "self_harm", "label": "Self-Harm & Suicide", "grade": "A-", "score": 93.7, "weight": 5 }, ...
  ],
  "regulatory": { "exposureLevel": "Very High", "applicableLawCount": 14, ... },
  "complianceGap": { "coveragePercent": 85.7, "totalGaps": 3, ... }
}
```

/api/research/scores/categories

All 21 test categories with average scores and platform counts.
Response

```json
{
  "meta": { "totalCategories": 21, "aiChatbotCategories": 12, "streamingCategories": 9 },
  "categories": [
    {
      "categoryId": "self_harm", "label": "Self-Harm & Suicide",
      "group": "Critical Safety", "portal": "ai_chatbot", "weight": 5,
      "averageScore": 60.9, "averageGrade": "D-",
      "highestScore": 93.7, "lowestScore": 18.4, "platformCount": 8
    }, ...
  ]
}
```

/api/research/scores/categories/:categoryId

Category leaderboard showing how all platforms rank in a specific test area.
Parameters
- categoryId (string): AI portal: self_harm, predatory_grooming, explicit_sexual, violence_weapons, drugs_substances, radicalization, eating_disorders, emotional_manipulation, cyberbullying, pii_extraction, jailbreak_resistance, academic_dishonesty. Streaming portal: PE-01, SD-01, PL-01, RL-01, MF-01, DU-01, KM-01, CB-01, CG-01

Response
```json
{
  "categoryId": "self_harm", "label": "Self-Harm & Suicide",
  "group": "Critical Safety", "weight": 5,
  "averageScore": 60.9, "averageGrade": "D-", "platformCount": 8,
  "platforms": [
    { "rank": 1, "platformId": "gemini", "platformName": "Gemini", "grade": "A-", "score": 93.7 }, ...
  ]
}
```

/api/research/badge/:platformId

Dynamic SVG safety badge for embedding. Supports multiple styles.
Parameters
- platformId (string): any valid platform ID
- style (query): flat (default), flat-square, or plastic
- label (query): custom label text (default: "Phosra Safety")

Response
SVG image (image/svg+xml). CORS enabled for embedding.
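Since the badge accepts `style` and `label` as query parameters, an embed URL can be assembled like this (a sketch; the `badge_url` helper is ours, not part of the API):

```python
from urllib.parse import urlencode

def badge_url(platform_id, style=None, label=None):
    """Build a badge URL; style and label map to the documented query params."""
    params = {k: v for k, v in (("style", style), ("label", label)) if v}
    qs = urlencode(params)
    base = f"https://www.phosra.com/api/research/badge/{platform_id}"
    return f"{base}?{qs}" if qs else base

print(badge_url("chatgpt"))
print(badge_url("gemini", style="flat-square", label="Safety Grade"))
```

`urlencode` handles percent-encoding of the custom label, so spaces and punctuation in `label` are safe to pass through.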
/api/research/scores/csv

Download all scorecard data as a CSV file.
Response
CSV file download with platform grades, scores, and category breakdowns.
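The download can be processed with a standard CSV reader. A minimal sketch (the column names below are illustrative; the real header comes from the downloaded file):

```python
import csv
import io

# Illustrative rows; actual columns come from the CSV download's header.
sample_csv = """platformId,platformName,overallGrade,numericalScore
gemini,Gemini,A,94.3
chatgpt,ChatGPT,B+,87.3
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
top = max(rows, key=lambda r: float(r["numericalScore"]))
print(top["platformName"], top["overallGrade"])  # Gemini A
```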
/research/scores/feed.xml

RSS 2.0 feed with all platform report cards. Auto-discovery enabled.
Response
RSS 2.0 XML feed with Atom namespace.
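The feed can be parsed with any RSS-capable XML parser. A sketch against a minimal sample with the declared RSS 2.0 + Atom-namespace structure (the channel and item contents here are invented placeholders; real titles and links will differ):

```python
import xml.etree.ElementTree as ET

# Minimal sample; real feed contents will differ.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Phosra Safety Scorecard</title>
    <item>
      <title>ChatGPT Report Card</title>
      <link>https://www.phosra.com/research/scores/platforms/chatgpt</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(sample)
items = [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]
for title, link in items:
    print(title, "->", link)
```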
Badge Embed Examples
Markdown:

```markdown
[![Phosra Safety Badge](https://www.phosra.com/api/research/badge/chatgpt)](https://www.phosra.com/research/scores/platforms/chatgpt)
```

HTML:

```html
<a href="https://www.phosra.com/research/scores/platforms/chatgpt"><img src="https://www.phosra.com/api/research/badge/chatgpt" alt="Phosra Safety Badge" /></a>
```

Usage Notes
- All JSON endpoints and badge SVGs are cached for 1 hour (ISR).
- No authentication required. CORS is enabled on badge endpoints for client-side embedding.
- Data is updated when new testing rounds are published. Check the RSS feed for updates.
- Attribution: link back to phosra.com/research/scores when displaying data.