## Why this matters
Half of the major AI vendors are now rated HIGH risk — but for very different reasons. OpenAI's risk is political entanglement. Anthropic's is political exclusion. xAI's is governance collapse. Your existing GRC tools don't track any of this.
## What we track — 6 risk dimensions
- Government Relations — Direct government contracts, lobbying, political donations, regulatory capture
- Political Contagion — FUD spillover, media narrative risk, guilt-by-association effects
- Technical — Security incidents, model reliability, API stability, safety track record
- Business — Financial health, funding runway, governance, customer concentration
- Compliance — Data sovereignty, privacy law, AI regulation readiness (EU AI Act, CPS 230)
- Supply Chain — Infrastructure dependencies, compute concentration, vendor lock-in depth
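To make the taxonomy above concrete, here is a minimal sketch of how a per-vendor record across the six dimensions might look, with an unweighted composite score and a LOW/MEDIUM/HIGH banding. The class name, 0–10 scale, equal weighting, and band cutoffs are all illustrative assumptions, not the Risk Index's actual schema or methodology.

```python
from dataclasses import dataclass

@dataclass
class VendorRisk:
    # One score per tracked dimension, 0 (low) to 10 (high) — hypothetical scale.
    vendor: str
    government_relations: float
    political_contagion: float
    technical: float
    business: float
    compliance: float
    supply_chain: float

    def composite(self) -> float:
        """Unweighted mean across the six dimensions (illustrative only)."""
        scores = [
            self.government_relations, self.political_contagion,
            self.technical, self.business, self.compliance,
            self.supply_chain,
        ]
        return sum(scores) / len(scores)

    def band(self) -> str:
        """Map the composite to a rating band; cutoffs are assumptions."""
        c = self.composite()
        return "HIGH" if c >= 7 else "MEDIUM" if c >= 4 else "LOW"

# Hypothetical vendor, not a real rating:
example = VendorRisk("ExampleAI", 8, 7, 5, 6, 7, 9)
print(example.band())  # composite 7.0 → HIGH
```

In practice a real index would weight dimensions differently (political contagion rarely counts the same as API stability), but an equal-weight mean keeps the shape of the data visible.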
## Who this is for
- CISOs and security leads managing AI vendor assessments
- Compliance officers preparing for AI regulations
- Engineering leaders managing multi-model architectures
- Board members overseeing AI strategy
## CPS 230: Your AI vendor is probably a material service provider
APRA's CPS 230 has been in effect since July 2025. Under Paragraph 49, any service provider you rely on for critical operations — including AI APIs — is a material service provider. Paragraph 50 goes further: "core technology services" are material by default unless you can justify otherwise.
If your organisation uses OpenAI, Anthropic, or Google AI in production, they almost certainly belong on your CPS 230 material service provider register. Our Risk Index maps directly to CPS 230 assessment requirements — political risk, operational resilience, service disruption probability, and compliance posture for each vendor.
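The Paragraph 49/50 logic described above reduces to a simple decision rule. The sketch below paraphrases it for a register triage pass; the function name, parameters, and the `rebuttal_documented` flag are illustrative assumptions, not an official APRA schema.

```python
def is_material(supports_critical_operations: bool,
                is_core_technology_service: bool,
                rebuttal_documented: bool = False) -> bool:
    """Triage a provider for the CPS 230 material service provider register.

    Paraphrases Paras 49-50 as described in the text; not legal advice.
    """
    # Para 49: providers relied on for critical operations are material.
    if supports_critical_operations:
        return True
    # Para 50: core technology services are material by default,
    # unless a documented justification rebuts that presumption.
    if is_core_technology_service and not rebuttal_documented:
        return True
    return False

# A production AI API is a core technology service and typically
# supports critical operations, so it lands on the register:
print(is_material(True, True))  # True
```

Note the asymmetry: a documented rebuttal can keep a core technology service off the register, but nothing rebuts reliance on a provider for critical operations.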