Frequently Asked Questions

AI Compliance: What Every Business Needs to Know

Straight answers to the questions we hear most — from why governance matters now, to what regulations apply to you, to how to take the first step.

🌐 Why AI Compliance Matters

The business case — beyond the legal obligation

The perception that AI governance is only a big-company concern is fading fast. Governance is now directly tied to business risk, not just regulatory overhead. When an AI system makes a flawed hiring decision, a biased credit assessment, or a harmful clinical recommendation, the organisation behind it faces reputational damage, legal liability, and customer attrition — regardless of its size.

Businesses that establish governance early benefit in three concrete ways: they can demonstrate trustworthiness to customers and partners who are actively asking about AI practices; they reduce the cost of remediating problems discovered late; and they avoid the scramble of retrofitting controls under regulatory deadline pressure.
Rule of thumb: retrofitting governance controls after the fact typically costs 5–10× more than building them in from the start.

Even a seemingly unsophisticated AI system can be in scope — what counts as "regulated AI" is broader than most teams expect. A customer-facing recommendation engine that influences purchasing decisions, a chatbot used in hiring workflows, or a content moderation system all fall into categories that regulators in the EU, US, and elsewhere consider consequential.

The right question isn't "is our system sophisticated?" — it's "does our system influence outcomes that matter to people?" If yes, governance obligations are likely to follow, and demonstrating responsible use is increasingly a commercial expectation from enterprise customers conducting vendor due diligence.

Using a third-party AI vendor does not make compliance the vendor's problem — this is one of the most common misconceptions. Under every major AI regulation, the organisation that deploys AI in its products or services carries significant compliance obligations, regardless of whether it built the underlying model.

The EU AI Act, for example, explicitly distinguishes between AI "providers" (model builders) and "deployers" (businesses that integrate models into applications), and places meaningful obligations on both. As a deployer you are responsible for: ensuring the system is fit for purpose in your context, conducting required impact or conformity assessments, establishing human oversight mechanisms, monitoring for post-deployment risks, and documenting how the system makes decisions.
Procurement note: Your vendor's compliance certification covers their model — not your deployment. You need your own assessment.

The risk landscape of non-compliance spans four categories:

1. Regulatory fines. The EU AI Act allows fines of up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices, and €15 million or 3% for other violations (a worked sketch of these caps follows this list). US state laws are earlier-stage but trending similarly.

2. Market access. Non-compliant AI products can be blocked from operating in the EU. As compliance certification spreads, enterprise procurement teams and public sector contracts increasingly require it.

3. Litigation exposure. Bias, privacy violations, or harm caused by AI systems are increasingly actionable under existing tort, consumer protection, and anti-discrimination law — even before specific AI laws take effect.

4. Trust damage. High-profile AI failures attract media and regulator attention that can be disproportionate to the underlying technical issue. The reputational cost of a publicised bias incident often exceeds any fine.
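To make the fine ceilings concrete, here is a minimal sketch of the cap arithmetic quoted above. The function name and structure are illustrative only; this simply turns the numbers above into code, and is not legal advice:

```python
def eu_ai_act_fine_ceiling(turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound on an EU AI Act fine, using the caps quoted above:
    the higher of a fixed amount and a share of global annual turnover
    (EUR 35M / 7% for prohibited practices, EUR 15M / 3% otherwise)."""
    fixed, share = (35e6, 0.07) if prohibited_practice else (15e6, 0.03)
    return max(fixed, share * turnover_eur)

# Example: EUR 2B global turnover, prohibited-practice violation:
# max(35M, 0.07 * 2B) = EUR 140M ceiling.
print(eu_ai_act_fine_ceiling(2e9, prohibited_practice=True))  # 140000000.0
```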

Vendor compliance claims deserve scrutiny. A few things to check:

"Compliant" with what, exactly? Many vendors self-certify against a loose interpretation of a framework without independent verification. Ask for the specific regulation or standard, the scope of the assessment, and who conducted it.

Their compliance ≠ your compliance. Even a genuinely compliant AI product can be deployed in a non-compliant way. Your obligations cover how you integrate, configure, monitor, and use the system — none of which the vendor controls or certifies.

Contracts and indemnity. Contractual protections help commercially, but if a regulator investigates your use of an AI system, your vendor's compliance posture is largely irrelevant to your liability. You need your own documented assessment.
Good vendor due diligence is step one. Your own deployment assessment is step two. Both are required.

AI governance and data privacy are closely related but not the same. Privacy programmes typically cover how personal data is collected, stored, processed, and shared. AI governance extends that into how models are built on that data, what decisions they drive, and what safeguards exist around automated decision-making.

Where they overlap: GDPR Article 22 restricts automated decision-making with legal or significant effects — directly implicating AI systems. GDPR's data minimisation and purpose limitation principles apply to training data. Data subject rights (access, erasure) become complex when personal data is embedded in a model.

Where AI governance goes further: Bias testing, model performance monitoring, transparency about how decisions are made, human oversight mechanisms, and ongoing risk assessments are not typically covered by privacy frameworks alone.

If you already have a mature GDPR programme, you have a strong foundation — AI governance builds on top of it, extending controls from data handling to model behaviour and decision outputs.
⚖️ Regulations & Mandates

What laws exist, what they require, and which are mandatory vs. voluntary

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive, binding AI law. It classifies AI systems by risk — from unacceptable (prohibited) to high-risk (strict controls required) to limited and minimal risk — and assigns obligations based on that classification.

It applies to your business regardless of where you're based if you: place AI systems on the EU market, deploy AI systems to users in the EU, or operate AI systems that produce outputs used in the EU. This extraterritorial scope is intentional and mirrors the GDPR approach.

Key requirements for high-risk AI systems include: conformity assessments, technical documentation, risk management systems, data governance, human oversight, transparency, and post-market monitoring.
If you have EU customers, EU employees, or your AI outputs reach EU residents, you are almost certainly in scope.

The NIST AI RMF 1.0 is a voluntary framework published by the US National Institute of Standards and Technology. It is structured around four functions: GOVERN (culture and accountability), MAP (context and risk identification), MEASURE (risk analysis and evaluation), and MANAGE (risk treatment and monitoring).

While not legally required at the federal level, it carries substantial practical weight:
  • US federal agencies are directed to consider it in AI procurement
  • It is increasingly referenced in state-level AI legislation as a compliance pathway
  • Enterprise and government clients frequently list NIST AI RMF alignment in vendor requirements
  • It maps well to the EU AI Act's technical requirements, making it a strong starting point for multi-jurisdiction readiness
In practice, "voluntary" means no fine for ignoring it — but it also means no competitive advantage for businesses that don't.

In the absence of comprehensive federal AI legislation, US states have begun passing their own laws:

Colorado AI Act (SB 24-205) — Signed into law, effective February 1, 2026. Covers "high-risk" AI systems used in consequential decisions such as employment, housing, education, healthcare, and financial services. Requires impact assessments, bias testing, consumer disclosures, and remediation processes.

New York AI Act (A8884) — Proposed legislation modelled closely on the EU AI Act's risk-based approach. Targets AI systems making consequential decisions in New York and would require pre-deployment risk assessments and bias audits.

Other states: California, Texas, Virginia, Illinois, and others have passed or proposed AI bills covering specific domains (employment, insurance, healthcare). The patchwork is expanding rapidly.
Operating nationally? You may need to satisfy the requirements of multiple state laws simultaneously.

ISO/IEC 42001:2023 is the international standard for AI Management Systems (AIMS). Published in December 2023, it follows the same structure as ISO 27001 (information security) and ISO 9001 (quality management), making it straightforward to integrate if you already hold those certifications.

Certification is voluntary but commercially significant. Organisations pursue it to:
  • Provide third-party verified evidence of responsible AI governance to clients and regulators
  • Meet procurement requirements from enterprise customers or government contracts
  • Demonstrate a structured, repeatable approach to AI risk — rather than ad hoc controls
  • Position ahead of competitors who have not yet formalised governance
ISO 42001 also maps well to the EU AI Act's documentation and risk management requirements, so organisations targeting both often use 42001 as a foundation.

No single global standard exists yet — though the industry is converging. ISO/IEC 42001 is the closest thing to a global baseline that maps to multiple regional regulations. It was deliberately designed to be compatible with the EU AI Act's requirements for AI management systems, and NIST has published alignment mappings between the AI RMF and ISO 42001.

A practical multi-jurisdiction strategy used by many organisations:
  1. Build your AI risk management programme on NIST AI RMF as a comprehensive, risk-based foundation
  2. Formalise it through ISO 42001 certification for third-party verification and commercial credibility
  3. Layer on jurisdiction-specific requirements (EU AI Act, state laws) as they apply — most map well to steps 1 and 2
This approach avoids duplicating governance work across frameworks and gives you a defensible posture for most regulatory scenarios your business is likely to face.

The EU AI Act has a specific chapter for General-Purpose AI (GPAI) models — those trained on broad data that can perform a wide range of tasks. Key obligations for GPAI model providers (effective August 2025) include:
  • Technical documentation requirements
  • Compliance with EU copyright law regarding training data
  • Publishing summaries of training data
  • Additional obligations for "systemic risk" models (above a compute threshold)
For businesses using GPAI via API: You are a deployer, not a GPAI provider. Your obligations focus on the specific application you build — particularly if it's used in a high-risk context. The GPAI provider's compliance with their obligations does not substitute for your deployment-level assessment.
📅 Timelines & Enforcement Deadlines

What's already in force, what's coming, and when you need to act

The EU AI Act entered into force on August 1, 2024, with a phased implementation schedule:
  • Aug 2024: Act entered into force (prohibitions on unacceptable-risk AI apply from Feb 2025).
  • Aug 2025: GPAI model obligations apply; codes of practice take effect.
  • Aug 2026: Full obligations for high-risk AI systems enforced.
  • 2027: Deadline for high-risk AI embedded in regulated products.

For most businesses building or deploying high-risk AI, August 2026 is the critical deadline. That gives compliance teams roughly 18 months from now to complete risk assessments, build governance controls, and prepare technical documentation — which is tight if you're starting from zero.

Enforcement is already underway in related areas and the trajectory is clear. The EU has been aggressively enforcing GDPR since 2018 — fines have totalled billions of euros. EU AI regulators have stated explicitly that AI Act enforcement will be similarly rigorous, and national market surveillance authorities have been given significant powers.

In the US, the FTC has taken enforcement actions under existing consumer protection law against AI-powered products making deceptive claims. The EEOC has issued guidance on AI in employment. State AGs in Colorado and Illinois have already brought cases under algorithmic bias laws.

The pattern from GDPR is instructive: early-mover companies that had documented compliance programmes faced far smaller fines when incidents occurred, because they could demonstrate good-faith efforts. Companies that had done nothing faced both the maximum fines and the worst press coverage.
Documented governance is your best protection — even before enforcement is routine.

For a startup weighing whether governance can wait: it depends on your trajectory, but "later" often becomes "never" without intention — and then it becomes "crisis." Three scenarios where early action pays off disproportionately:

Enterprise sales: B2B enterprise customers — particularly in financial services, healthcare, and government — now include AI governance questions in vendor security questionnaires. Having a documented posture can be a dealmaker; not having one is increasingly a dealbreaker.

Fundraising: Sophisticated investors and VCs, especially those with European LPs or portfolio companies, are beginning to treat AI governance as part of technical due diligence.

Acquisition readiness: M&A technical due diligence now covers AI risk. Undocumented AI practices can introduce deal friction or reduce valuation, particularly if the buyer is a regulated entity.

A lightweight initial assessment — even just understanding where your gaps are — costs very little and builds a foundation you can grow.

Missing a deadline does not automatically trigger a fine — regulators generally need to identify a violation, investigate, and make a determination. But it dramatically increases your exposure in several ways:

Reactive rather than proactive remediation. Companies scrambling to comply after a regulator enquiry face far higher costs — emergency consultants, legal fees, rushed system changes — than those who built controls deliberately in advance.

Reduced mitigating factors. Regulators across GDPR, FTC enforcement, and the EU AI Act have all indicated that documented good-faith compliance efforts reduce penalties. Missing a deadline removes this defence.

Commercial consequences arrive first. Before a regulatory fine, you're more likely to lose a customer deal, fail a vendor security questionnaire, or be publicly called out by a competitor. These consequences are not bounded by the same procedural safeguards as formal enforcement.

The practical advice: even a partial, documented programme started now is materially better than nothing.
🎯 Does This Apply to Me?

Scoping questions — figuring out which rules affect your specific situation

The EU AI Act classifies an AI system as high-risk via two routes: it is a safety component of a product covered by EU product-safety legislation (Annex I), or it falls within the specific areas listed in Annex III. The Annex III list includes:
  • Biometric identification and categorisation
  • Critical infrastructure management (energy, water, transport)
  • Educational and vocational training access or assessment
  • Employment, HR, and workforce management
  • Access to essential private services (credit scoring, insurance)
  • Law enforcement and border control
  • Administration of justice and democratic processes
  • Healthcare and emergency services
If your AI system makes or materially influences decisions in any of these areas, it is likely high-risk. General-purpose AI tools (chatbots, image generators) used in these contexts by deployers can also be re-classified as high-risk based on use.

Internal use narrows — but does not eliminate — your obligations. Key considerations:

HR and workforce applications: AI used for recruiting, performance evaluation, task allocation, or termination decisions falls under the EU AI Act's Annex III high-risk list and the Colorado AI Act, even if employees never see the system directly.

Internal decision support: AI tools that inform consequential decisions — credit approvals, insurance underwriting, eligibility determinations — are typically in scope regardless of whether the end user is internal or external.

Procurement chain: If you supply an enterprise customer who is subject to AI regulation, they may contractually require compliance evidence from you as part of their own obligations.

Even if you operate only in the US, more may apply than you'd expect. The EU AI Act applies if your AI system's outputs are used in the EU — not just if you have a legal entity or office there. If any of your customers, users, or the people affected by your AI's decisions are in the EU, you are likely in scope.

For a purely domestic US-only operation, the EU AI Act may not apply directly. But US state laws might — Colorado's AI Act applies to businesses that deploy high-risk AI affecting Colorado residents, again regardless of where the company is based.

Additionally, even without a current legal obligation, voluntary alignment with NIST AI RMF or ISO 42001 is increasingly a competitive differentiator in B2B markets where customers run their own AI vendor risk programmes.

The EU AI Act's most stringent requirements do apply primarily to high-risk systems, and minimal-risk AI (image filters, spam detection, recommendation engines in entertainment) has very few mandatory obligations. But "low regulatory risk" is not the same as "no governance needed."

Risk can escalate. A product that starts as a low-risk recommendation engine can be repurposed or extended into high-risk use cases. Governance infrastructure built early means you're ready.

Customer expectations apply regardless. Enterprise B2B customers don't segment their vendor risk questionnaires by AI risk tier. They want to see documented governance regardless of where your system lands on the regulatory classification scale.

Voluntary codes of practice. For limited-risk AI, the EU AI Act encourages voluntary codes of conduct — and participating demonstrates a level of responsibility that differentiates you in the market. A lightweight NIST AI RMF assessment is a cost-effective way to establish that credibility.

Whether you build AI yourself or buy it matters, and meaningfully so. The EU AI Act distinguishes between:

Providers (developers who build and place AI on the market): carry the heaviest obligations — conformity assessments, technical documentation, CE marking for high-risk AI, registration in the EU AI database, post-market monitoring, incident reporting.

Deployers (organisations that use AI in their products or services): required to conduct use-case-specific risk assessments, ensure appropriate human oversight, monitor for emerging risks, and tell users when AI is making decisions that affect them.

Importers and distributors also have obligations if they place AI systems on the EU market.

If you build AI and sell it — you are a provider. If you build on top of others' models to create products — you are primarily a deployer, though you become a provider for the combined system you ship. Many organisations are both simultaneously.
This assessment tool covers both roles. The intake form asks about your position in the AI supply chain and adjusts questions accordingly.
🚀 Getting Started

Practical first steps for building an AI governance programme

The highest-leverage first action is an inventory and risk assessment: understand what AI systems you operate, where they make or influence decisions, and who is affected. Without this, governance work is untargeted.

A practical starting sequence (a minimal sketch of an inventory entry follows the list):
  1. Inventory: List every AI or ML system in production — including third-party APIs used in your products.
  2. Classify: For each, identify the decision domain, who is affected, and a preliminary risk tier.
  3. Assess: Run a structured compliance assessment against the regulation most relevant to your business.
  4. Prioritise: Focus initial remediation on high-risk systems with the largest gaps.
  5. Establish ownership: Assign clear accountability for AI governance — typically a cross-functional group including legal, product, and engineering.
A structured assessment tool like this one can compress steps 3 and 4 from weeks to hours by guiding you through the right questions with immediate scored output.
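As an illustration of steps 1 and 2, here is a minimal sketch of what one inventory entry might look like as a data structure. The field names and risk tiers are hypothetical, not mandated by any framework:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (step 1) with a
    preliminary classification (step 2)."""
    name: str
    purpose: str                  # what it does in the product
    decision_domain: str          # e.g. hiring, credit, moderation
    affected_parties: list[str]   # who the outputs touch
    third_party_model: bool       # vendor API vs. built in-house
    owner: str                    # accountable team or person
    risk_tier: RiskTier

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="ranks inbound job applications",
        decision_domain="employment",
        affected_parties=["job applicants"],
        third_party_model=True,
        owner="People Ops + Engineering",
        risk_tier=RiskTier.HIGH,  # employment is an Annex III domain
    ),
]
```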

The right starting point depends on your primary driver:

If you operate in or sell to the EU: Start with the EU AI Act assessment to identify mandatory obligations and hard deadlines. Then layer in NIST AI RMF for a more comprehensive risk management foundation.

If you're building a governance programme from scratch (no immediate regulatory deadline): Start with NIST AI RMF. It's the most comprehensive framework for internal risk management and maps well to other regulations, making it the best foundation for multi-framework readiness.

If you're pursuing certification for commercial reasons: ISO 42001 is the path — it's the only framework with an independent certification scheme. Use NIST AI RMF to prepare for it.

If you're in a US state with specific obligations: Run the relevant state law assessment in parallel with NIST AI RMF, as requirements overlap significantly.
Good news: work done for one framework transfers heavily. NIST AI RMF coverage typically satisfies 60–70% of EU AI Act technical requirements.

How long an assessment takes depends on the maturity of your documentation and how many people you need to consult. With this tool, the assessment questionnaire itself typically takes 1–3 hours per framework for a single AI system, once you have the right people in the room (usually product owner + legal + a technical lead).

Common time sinks to anticipate:
  • Locating documentation — Risk registers, data lineage records, and model cards often don't exist yet and take time to create.
  • Stakeholder alignment — Legal and engineering teams frequently disagree on risk categorisation; build in consensus time.
  • Remediation planning — The assessment itself is fast; the work to close gaps is where calendar time accumulates.
Starting now with a baseline assessment is the right move — it surfaces the largest gaps and creates a documented record that demonstrates good-faith effort.

There is no single right answer to who should own AI governance, but the most effective structures we see share a common pattern: distributed accountability with a designated coordinator.

Ownership is typically split across:
  • Legal / Compliance: Regulatory scoping, documentation, external reporting obligations
  • Product / Engineering: Technical controls, model monitoring, data governance implementation
  • Risk / Audit: Internal assessments, gap tracking, board-level reporting
  • HR / Ethics: Bias testing, employee impact assessments, training
The coordinator role — sometimes a Chief AI Officer, sometimes a VP of Trust and Safety, sometimes a senior compliance manager — ensures these threads stay connected and progress is tracked.

For smaller organisations, a cross-functional working group meeting monthly, with a legal lead and a tech lead, is often sufficient to start.

Documentation requirements vary by framework and risk tier, but a core set of artifacts appears across nearly every regulation:

AI system inventory — A register of all AI systems in use, their purpose, risk classification, and ownership.

Risk assessment records — For each system: identified risks, likelihood, impact, and how they are being managed. Must be kept current.

Technical documentation — Model cards or equivalent: training data sources, known limitations, performance metrics, bias testing results, intended and prohibited use cases.

Data governance documentation — How training and inference data is sourced, labelled, stored, and managed.

Human oversight procedures — How humans can intervene in, override, or shut down AI systems, and under what circumstances.

Incident log — Records of AI failures, unexpected outputs, near-misses, and how they were resolved.

Post-market monitoring plan — How the system is monitored in production for performance degradation, drift, or emerging risks.
Running an assessment with this tool generates a gap analysis that tells you exactly which of these you're missing and in what priority order to address them.
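To show the shape of the technical documentation artifact, here is a minimal, hypothetical model card skeleton. The field names and example values are illustrative; regulations prescribe the content, not a schema:

```python
# A minimal, hypothetical model card skeleton for one AI system.
# Every field name and value below is illustrative, not prescribed.
model_card = {
    "system": "resume-screener",
    "training_data_sources": ["vendor-supplied; see supplier documentation"],
    "known_limitations": ["unvalidated for non-English resumes"],
    "performance_metrics": {"precision": 0.84, "recall": 0.79},
    "bias_testing": {
        "last_run": "2025-01-15",
        "attributes_checked": ["gender", "age"],
        "findings": "no significant disparity at current threshold",
    },
    "intended_use": "first-pass ranking with mandatory human review",
    "prohibited_use": ["automated rejection without human review"],
}
```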
🛠️ About This Assessment Tool

How it works, what it produces, and what it's not

This is a structured, questionnaire-driven compliance assessment platform. You describe your AI system and select the regulatory framework you need to assess against. The tool then guides you through a pillar-by-pillar questionnaire — covering governance, risk management, technical safeguards, operations, and monitoring.

At the end you receive:
  • A percentage compliance score and maturity level for each governance pillar
  • Automatic risk flags for critical gaps identified by your answers
  • A prioritised remediation roadmap categorised by urgency (0–30 days, 30–90 days, 90+ days)
  • A downloadable PDF report suitable for sharing with leadership, legal counsel, auditors, or regulators
Assessments are saved to your account so you can return, update answers, and track progress over time as your programme matures.

Completing an assessment here does not, by itself, make you compliant — and it's important to be clear about this. This tool is a preparatory aid, not a legal certification.

What it gives you is a structured, scored baseline of your current posture — identifying where you likely have gaps and what to prioritise. That baseline is genuinely valuable: it guides remediation effort, documents your good-faith assessment process, and gives you a clear picture before engaging external auditors or legal counsel.

Actual legal compliance for mandatory regulations (EU AI Act, Colorado AI Act) requires conformity assessments by notified bodies (for certain high-risk systems), legal review of your specific situation, and implementation of the controls identified as gaps. This tool accelerates and structures that process — it doesn't replace it.
Nothing in this tool constitutes legal advice. Always consult qualified legal counsel for your specific compliance obligations.

Your assessment data is private to your account and is never shared with third parties, used to train AI models, or disclosed in aggregate reports. Each account's data is isolated, and all data is stored on encrypted infrastructure.

Because compliance assessments contain sensitive information about your AI systems' capabilities and gaps, we treat this data with the same care as any confidential business documentation. See our Privacy Policy for full details.

You can do both: create multiple assessments — one per AI system — and run each against any of the 5 supported frameworks independently. All your assessments are accessible from your dashboard, so you can maintain a portfolio view across your organisation's AI systems.

If you need to assess the same system against multiple frameworks (e.g., both NIST AI RMF and the EU AI Act), simply create two assessments for the same system, selecting a different framework for each. Many of the governance controls overlap, so findings from one often inform the other.

Each question has multiple answer choices weighted by tier — Critical, High, Medium, and Standard. Your score for each question is the percentage of available points earned based on what you select. Questions answered N/A are excluded entirely from the calculation (they don't drag your score down or inflate it).

Your pillar score is the average of all answered question scores within that pillar. Each pillar maps to a maturity band:
  • 0–20%: Non-Compliant
  • 21–40%: Initial
  • 41–60%: Developing
  • 61–80%: Established
  • 81–95%: Advanced
  • 96–100%: Optimising
Your overall score is a weighted average of pillar scores, where each pillar's weight reflects its relative importance in the regulation (e.g., in NIST AI RMF, GOVERN is weighted at 30%).

Scores are based on your answers and update in real time as you complete more of the questionnaire.
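For readers who want the arithmetic spelled out, here is a minimal sketch of the scoring model described above. The function names and data shapes are our illustration, not the tool's actual implementation:

```python
def pillar_score(question_scores: list) -> float:
    """Average of answered question scores (0-100 each); N/A answers
    (None) are excluded entirely rather than counted as zero."""
    answered = [s for s in question_scores if s is not None]
    return sum(answered) / len(answered) if answered else 0.0

def maturity_band(score: float) -> str:
    """Map a pillar score to the maturity bands listed above."""
    for upper, label in [(20, "Non-Compliant"), (40, "Initial"),
                         (60, "Developing"), (80, "Established"),
                         (95, "Advanced"), (100, "Optimising")]:
        if score <= upper:
            return label
    return "Optimising"

def overall_score(pillars: dict, weights: dict) -> float:
    """Weighted average of pillar scores; each weight reflects the
    pillar's relative importance in the framework."""
    total = sum(weights.values())
    return sum(pillars[p] * weights[p] for p in pillars) / total

# Example: N/A (None) answers drop out of the average entirely.
pillars = {"GOVERN": pillar_score([80.0, None, 60.0]),  # -> 70.0
           "MAP": pillar_score([50.0, 90.0])}           # -> 70.0
print(maturity_band(pillars["GOVERN"]))                     # Established
print(overall_score(pillars, {"GOVERN": 0.3, "MAP": 0.7}))  # 70.0
```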

Once you've completed an assessment, you can generate a professional PDF report from the report page. The PDF includes:
  • Assessment metadata (system name, framework, date, assessor)
  • Overall compliance score and rating (Compliant / Partially Compliant / Non-Compliant)
  • Per-pillar scores and maturity levels
  • All triggered compliance flags with severity ratings
  • A phased remediation roadmap: Immediate (0–30 days), Short-term (30–90 days), and Ongoing actions
  • Question-level evidence file attachments, if any were uploaded
The PDF is designed to be shared with leadership, legal teams, auditors, or customers as a structured record of your compliance assessment process.