
From Risk to Revenue: The SaaS Leader's Playbook for the EU AI Act and ISO 42001

Updated: Oct 21

Introduction: Lead With Trust or Get Left Behind


It is July 2025. The EU AI Act is no longer a future headline. Its first obligations are already enforceable, and the next wave lands in weeks. The UK is moving too, with the AI Security Institute (formerly the AI Safety Institute) shaping evaluation practice and feeding into global standards.


That AI-powered feature in your SaaS product, the one that ranks applicants, personalises content, or flags suspicious behaviour, is no longer just a nice-to-have. It is regulated. And you are responsible.


Most founders still see AI compliance as a burden. The smart ones see it as an edge. This guide is your playbook. We offer a four-level maturity model to help you go from exposed to enterprise-ready.


Timeline: What Happens When

Here is a clear view of how the EU AI Act rolls out:

  • 1 August 2024: EU AI Act enters into force.

  • 2 February 2025: Rules for prohibited AI systems apply. This includes AI that manipulates behaviour, systems that exploit vulnerabilities, social scoring, and real-time remote biometric identification in publicly accessible spaces for law enforcement (outside narrowly defined exceptions).

  • 2 August 2025: Rules apply to general-purpose AI (GPAI), including foundation models and LLMs like GPT or Claude.

  • 2 August 2026: Rules for high-risk AI systems under Annex III come into force, along with obligations for deployers. Providers must conduct conformity assessments, register in the EU database, maintain technical documentation, and run post-market monitoring and incident reporting.

  • 2 August 2027: The rules extend to high-risk AI systems embedded in products regulated under Annex I (such as medical devices and machinery), and GPAI models placed on the market before August 2025 must be brought into compliance.


As of July 2025, the prohibitions are already enforceable and the GPAI obligations land on 2 August 2025. If your SaaS uses an LLM, you are operating inside the compliance window right now.

A visual timeline of the EU AI Act

Why Your SaaS is Now a 'High-Risk AI System'

Many SaaS teams believe that using a third-party API like OpenAI or Anthropic means the risk is not theirs. This is incorrect.


The EU AI Act focuses on how AI is used. If your product automates decisions in sensitive areas and you place it on the market under your own name, you are the 'provider' in legal terms, even when a third-party model does the heavy lifting. You are liable.


This includes:

  • Screening CVs or job applicants

  • Scoring users for loans or insurance

  • Recommending products, services, or content

  • Moderating user-generated content


If you do this, your system may be classified as a 'high-risk AI system' under Annex III of the Act. That means you must be able to produce on demand:

  • A full technical file proving how the system works.

  • A documented risk management system updated throughout the model's lifecycle.

  • Immutable logs to trace and explain specific outcomes (a minimal logging sketch follows this list).

  • Evidence of human oversight and clear appeal mechanisms.
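
To make the logging requirement concrete, here is a minimal Python sketch of an append-only, hash-chained decision log. The field names are our illustrative assumptions, not a prescribed Annex IV schema; the point is that every entry is chained to its predecessor, so any after-the-fact tampering breaks verification.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained log of AI-assisted decisions."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, model_id, inputs_digest, outcome, reviewer=None):
        # inputs_digest should be a hash of the inputs, not raw personal data
        entry = {
            "ts": time.time(),
            "model_id": model_id,
            "inputs_digest": inputs_digest,
            "outcome": outcome,
            "reviewer": reviewer,  # the human in the loop, if any
            "prev_hash": self._last_hash,
        }
        # Chain each entry to its predecessor so tampering is detectable.
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)
        return entry

    def verify(self):
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

In production you would persist entries to write-once storage rather than memory, but the chaining principle is the same.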


Fines can reach €35 million or 7% of global annual turnover, whichever is higher. Non-compliance can also mean removal from the EU market.


The Three Pillars of AI Compliance

To meet the law, you need a structured approach. There are three key areas:


Pillar 1: Governance and Documentation

  • Maintain a model register (a minimal example follows this list)

  • Document purpose, capability, and limits

  • Keep an audit trail of data sources and processing
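
As a starting point, here is a minimal sketch of what one register entry could look like, assuming a simple in-process Python register. The fields loosely mirror the questions Annex IV technical documentation asks; the names themselves are our assumptions, not a mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    version: str
    intended_purpose: str              # what the system is for, in plain language
    known_limitations: list[str]       # documented limits and failure modes
    training_data_sources: list[str]   # audit trail of where the data came from
    risk_tier: str                     # e.g. "high-risk" after Annex III screening
    owner: str                         # an accountable person, not a team alias
    changelog: list[str] = field(default_factory=list)

# The register itself: one entry per model version you ship.
register: dict[str, ModelRecord] = {}

def register_model(record: ModelRecord) -> None:
    register[f"{record.model_id}:{record.version}"] = record
```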


Pillar 2: Risk Management and Testing

  • Identify and score risks tied to your AI use

  • Test for bias and unintended outcomes (see the sketch after this list)

  • Track performance and flag drift
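
Bias testing does not have to wait for a heavyweight platform. Below is a minimal sketch of a disparate-impact screen using the conventional four-fifths (0.8) selection-rate heuristic; it is one screening check to log per release, not a complete fairness audit.

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group_label, was_selected) pairs."""
    totals, selected = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    rates = selection_rates(outcomes)
    # Ratio of the worst-off group's rate to the best-off group's rate;
    # a value below 0.8 is a conventional trigger for closer review.
    return min(rates.values()) / max(rates.values())

# Example: group A is selected at 2/3, group B at 1/3, so the ratio is 0.5.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
assert round(disparate_impact_ratio(sample), 2) == 0.5
```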


Pillar 3: Transparency and Human Oversight

  • Let users know when they interact with AI

  • Provide reasons for decisions

  • Allow human review and appeals (an example response payload follows)
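
One illustrative way to cover all three points is to make every AI-assisted decision return a payload that discloses the automation, states the reasons, and routes the user to a human appeal. The structure and the appeal URL below are assumptions for illustration, not a required format.

```python
def decision_payload(outcome, top_factors, case_id):
    return {
        "outcome": outcome,
        "automated": True,                 # explicit disclosure that AI was involved
        "reasons": top_factors,            # plain-language factors behind the outcome
        "appeal": {
            "available": True,
            "url": f"/appeals/new?case={case_id}",  # hypothetical route
            "reviewed_by": "human",
        },
    }
```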


ISO/IEC 42001, the international standard for AI management systems, is the blueprint for building this kind of programme. Aligning with it helps you meet EU AI Act, GDPR, and UK requirements.


A Maturity Model for AI Compliance in SaaS

We define four levels of maturity:


Level 1: Naive Automation

  • Mindset: "We just use an API. It's not our problem."

  • No model tracking or documentation

  • No ownership of AI risk


Level 2: Compliance-Aware

  • Some records exist

  • Risk discussed but not embedded in workflows

  • Policies are drafted but not used


Level 3: Governed AI

  • Mindset: "We manage model risk like security risk."

  • Roles are clear

  • Bias testing and logging in place

  • Risk scoring applied to each model


Level 4: Trust-Centric AI

  • Mindset: "Our ISO 42001 alignment helps us win deals."

  • Full AI system of record

  • Live dashboards for risk and audit

  • Ready for regulators and enterprise clients


Case Studies: What Non-Compliance Looks Like

Case Study 1: TalentMatch - Automated Hiring at Scale

Background: A SaaS company provides AI-driven candidate screening tools for recruitment platforms. They use a proprietary model to score CVs and shortlist candidates based on 'fit' scores.


Workflow:

  • Uploads CVs via frontend portal

  • AI model ranks applicants on internal criteria

  • Platform suggests top 5 candidates to hiring manager


Compliance Gaps:

  • No documentation on what inputs affect the score

  • No audit trail for how recommendations are made

  • No human appeal mechanism


Regulatory Risk: The system would be classified as high-risk under Annex III (employment and worker management). Consequence: An inquiry from a national supervisory authority (such as France's CNIL) could force them to pull the product from the market until they can produce a complete, compliant technical file, a process that could take months and cost key customers.


Case Study 2: CustodianAI - Content Moderation for Communities

Background: A customer success platform uses OpenAI's API to auto-flag offensive content in community forums and emails.


Workflow:

  • Every message is scanned through a moderation filter

  • Flagged content is hidden from view

  • Appeal process is manual and unclear


Compliance Gaps:

  • No visibility into how the third-party model makes moderation decisions

  • No risk scoring for false positives

  • No logs of past flags or user appeals


Regulatory Risk: Violates transparency and oversight requirements. Consequence: The company could face fines under both the AI Act and the GDPR for opaque processing of user data, overlapping exposure that compounds the reputational damage.


What Other Platforms Offer (And What They Miss)

Your existing GRC stack is essential, but it was built for a world before generative AI. These platforms are 'AI-blind'—they can audit your corporate security, but they cannot see inside your AI models. Here is a breakdown of what leading platforms offer and where the critical governance gap lies:


Vanta

Vanta recently launched an EU AI Act checklist and educational webinar series. It helps companies understand regulatory obligations but stops short of actual technical enforcement. It does not offer model-level tracking, risk scoring, or audit trails for AI systems.


Strengths:

  • Strong baseline GRC automation (SOC 2, ISO 27001)

  • Good onboarding support for general compliance teams


Gaps:

  • Lacks AI SBOM and model transparency logs

  • No integration with LLM prompts or MLOps pipelines

  • No native support for ISO 42001-specific controls


Holistic AI

Holistic AI focuses heavily on EU AI Act risk classification and offers a suite of tools for enterprises to manage AI risk. Their readiness dashboard and prohibited-system classifier are comprehensive but geared towards large enterprises.


Strengths:

  • Domain-specific risk analysis, including GPAI tracing

  • Visual dashboards and policy documentation templates


Gaps:

  • Not built for developer workflows or SaaS CI/CD

  • Expensive and complex for mid-stage companies


A-LIGN

A-LIGN offers structured audit readiness for EU AI Act and ISO/IEC 42001. It positions its services for companies seeking certification as part of a broader compliance strategy.


Strengths:

  • Recognised certification partner with ISO 42001 accreditation

  • Strong background in audit evidence preparation and control evaluation


Gaps:

  • Service-heavy and slower to adapt to agile software environments

  • Not developer-oriented; lacks runtime or model-level visibility tools


Comp AI

A growing open-source GRC platform, Comp AI positions itself as a lightweight Vanta/Drata alternative. It supports SOC 2, ISO 27001, and GDPR automation, with Git-based evidence collection.


Strengths:

  • Developer-friendly, open-source architecture

  • Fast onboarding for general compliance readiness


Gaps:

  • No AI model-specific tooling

  • No native support for EU AI Act annex documentation or ISO 42001 mapping


Sprinto

Sprinto focuses on fast compliance for scale-ups. While strong on policy automation and audits, its content around AI governance is mostly educational and aspirational.


Strengths:

  • Simple UI for evidence collection

  • Active support for general compliance automation


Gaps:

  • Lacks AI system register, explainability tooling, and transparency logs

  • No active roadmap for AI-specific maturity tooling


Zerberus: Your Automated AI System of Record

Zerberus was built to see what other GRC platforms cannot. We bridge the gap between compliance policy and your actual code.


Automated Governance, Not Spreadsheets: Zerberus connects to your CI/CD pipeline to create a living model register and generate Annex IV-ready documentation automatically.
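
To make that concrete, here is a simplified sketch of the kind of artifact such automation produces: a technical-documentation stub rendered from a model-register entry. The field names and headings are illustrative assumptions, not the exact Zerberus output.

```python
def annex_iv_stub(record: dict) -> str:
    """Render a Markdown technical-documentation stub from a register entry."""
    lines = [
        f"# Technical documentation: {record['model_id']} v{record['version']}",
        "## Intended purpose",
        record["intended_purpose"],
        "## Known limitations",
        *[f"- {item}" for item in record["known_limitations"]],
        "## Training data sources",
        *[f"- {src}" for src in record["training_data_sources"]],
        "## Risk classification",
        record["risk_tier"],
    ]
    return "\n".join(lines)
```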


Quantifiable Risk Management, Not Guesswork: Assign risk scores to every model, log bias tests in real time, and map all findings to ISO 42001 controls for proactive risk posture.


A Unified Dashboard, Not Siloed Oversight: Our central dashboard gives your compliance, security, and sales teams a single source of truth. Demonstrate the fairness, safety, and audit-readiness of your AI in one place.


Zerberus helps you move from reactive compliance to proactive trust-building, creating competitive advantage with every release.


Conclusion: Lead, Do Not Follow

By mid-2025, the market has split in two. On one side are SaaS companies reacting to compliance as a crisis. On the other are leaders using it as a growth lever.


If you want to close enterprise deals, reduce legal risk, and build long-term trust, you must act now. Zerberus helps you do it without adding more overhead.


Ready to turn AI compliance into a strategic advantage?

Book a 15-Minute Demo to see how Zerberus works.

