EU AI Act Compliance

The EU AI Act's 2 August 2026 deadline: a mid-market readiness guide.

On 2 August 2026 — 90 days from today — the EU AI Act's high-risk system requirements become enforceable. Fines run up to €35 million or 7% of global annual turnover, whichever is higher. We've been in mid-market boardrooms across Portugal, the Netherlands, and Germany this past month. Most aren't ready — and not for the reasons their lawyers think.

01

What's actually happening on 2 August 2026

On 2 August 2026, the EU AI Act's full enforcement powers activate across all 27 member states. National authorities can begin imposing fines and corrective actions on high-risk AI systems — those defined under Annex III of the Act.

The Act's penalty structure has three tiers; at each tier the ceiling is whichever amount is higher:

  • €35 million or 7% of global turnover — for prohibited AI practices (those banned outright since February 2025)
  • €15 million or 3% of turnover — for non-compliance with high-risk system requirements
  • €7.5 million or 1% of turnover — for providing incorrect or misleading information to authorities
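A quick way to read these tiers: the maximum fine is the greater of the fixed amount and the turnover percentage. A minimal sketch of that arithmetic, with an illustrative firm size (function name and figures are ours, not from the Act):

```python
def fine_ceiling(fixed_cap_eur, turnover_pct, annual_turnover_eur):
    """Maximum fine at a tier: the higher of the fixed cap and the turnover share."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Tier 2 (high-risk non-compliance) for a firm with €200M global turnover:
# 3% of €200M is €6M, so the fixed €15M cap is the binding ceiling.
print(fine_ceiling(15_000_000, 0.03, 200_000_000))  # prints 15000000
```

For large enterprises the percentage dominates instead: at tier 1, a firm with €1 billion turnover faces a ceiling of €70 million (7%), not €35 million.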

02

The mid-market trap: why most firms underestimate their exposure

The most common mistake we see in mid-market boardrooms is the assumption that "high-risk AI" means something exotic — a custom-trained model, a generative system, a research-grade tool. That's not what Annex III says.

Annex III high-risk AI systems include AI used in employment decisions, education and vocational training, essential private services like creditworthiness scoring, essential public services, law enforcement, migration and border control, administration of justice, and critical infrastructure management.

If your HR team uses any AI to screen CVs — that's Annex III. If your customer service platform routes inquiries based on customer profile prediction — that's Annex III. If your insurance partner runs credit scoring on your customers — you may share deployer responsibility.

Most mid-market firms have AI exposure in at least one of these categories. Many have it in several. "We don't have high-risk AI" is wrong for roughly 60% of the mid-market firms we audit.

03

Three things we keep hearing in mid-market boardrooms — and why they're wrong

"We don't have high-risk AI." True for fewer mid-market firms than executives think. The right question isn't "do we build AI?" — it's "where does AI affect our decisions about people?"

"We'll wait until Brussels confirms the timeline." Even if the Council ratifies the Parliament's delay vote, your insurers, partners, and large enterprise customers will operate on the original August 2026 timeline. Their procurement and risk teams have already updated their vendor due diligence questionnaires. For the people who matter to your revenue, the date is fixed.

"Our IT team has it covered." AI Act compliance is not an IT problem. It's a documentation, governance, and accountability problem — sitting between Legal, Risk, Operations, and HR. IT alone cannot produce the deployer evidence regulators will ask for.

04

What documentation you actually need: the deployer evidence checklist

Based on the IAPP framing and what national authorities have signalled they will look for, here is the deployer evidence baseline every mid-market firm should have by July 2026.

  1. AI Inventory — a documented enumeration of every AI system in use across the organisation. Includes vendor, deployment date, business function, data inputs, decision outputs. Updated monthly.
  2. Risk Classification — for each system in the inventory, classification against Annex III categories. Most are low-risk; the high-risk ones are where compliance effort concentrates.
  3. Human Oversight Protocols — for each high-risk system, a documented procedure for how a human reviews, validates, or overrides AI outputs. Names the responsible role and the escalation path.
  4. Vendor Due Diligence — for each third-party AI tool, documentation that the vendor provides Article 13 transparency information.
  5. Accountability Assignment — a named individual, by role, accountable for each high-risk AI system. Not a committee. A person.
  6. Audit Trail — records of decisions made with AI assistance, sufficient to reconstruct what the AI suggested, what the human did, and why.

05

A 90-day readiness checklist

If you start today, here's what we recommend in order. This sequence is what we deliver in our AI Strategy Days leadership intensive — compressed for mid-market boards moving fast.

  1. Weeks 1–2 (Inventory): Survey department heads, audit corporate software accounts, sweep for shadow AI, compile a single inventory spreadsheet.
  2. Weeks 3–4 (Classification): Map each tool to Annex III categories using the EU's official classifier. Mark prohibited / high-risk / limited-risk / minimal-risk.
  3. Weeks 5–8 (Documentation): Build the six-item deployer evidence baseline for each high-risk system. Assign accountability to named individuals. Document human oversight protocols.
  4. Weeks 9–12 (Audit & readiness review): Outside review of documentation. Test oversight protocols in a real workflow. Brief the board on residual risk. Build the quarterly review rhythm.

06

Five common mistakes mid-market firms make on AI Act compliance

From the engagements we run, these are the most common compliance mistakes that show up before August.

  1. Treating it as GDPR Round 2. GDPR was about data; AI Act is about decisions. Don't reuse a GDPR template and assume it covers you.
  2. Letting Legal own it solo. Legal can interpret. Operations has to document. HR has to assign accountability. If Legal owns it alone, the documentation will be incomplete.
  3. Ignoring shadow AI. Employees using ChatGPT, Claude, or Gemini on personal accounts with company data — that use is yours under the Act, whether IT approved it or not.
  4. Skipping vendor due diligence. If your vendor can't provide Article 13 transparency documentation, that's your problem and a procurement red flag.
  5. Waiting for "perfect" before documenting. Documentation can be revised; lack of documentation cannot be retroactively fixed. Start with imperfect evidence in writing.

07

NTA's read: what we'd do if we were you

We tell mid-market companies when AI is the wrong answer. We also tell them when "wait and see" is the wrong answer. This is the second kind of moment.

The companies we're working with this quarter aren't waiting for legal certainty — they're moving on operational certainty. The 2 August timeline is the one their customers, insurers, and boards are using. That's the timeline that matters.

If your firm has under 50 active AI tools, the inventory and classification work is achievable in two weeks of dedicated focus. If you're over that threshold, this is where our AI Integration & Adoption Programme becomes useful — twelve weeks to operational readiness across the organisation, not just on paper.

The honest conversation we have with most boards: this isn't about compliance theatre. It's about whether your AI use is governed enough to scale. The August deadline is the forcing function. The real prize is the operating discipline that comes from doing the work.

08

Frequently asked questions

Does the EU AI Act apply to companies outside the EU?

Yes, in most cases. The Act applies extraterritorially to any provider or deployer whose AI system's output is used in the EU — regardless of where the company is headquartered.

What's the difference between a provider and a deployer?

A provider develops or markets an AI system. A deployer uses it in a professional context. Most mid-market firms are deployers. Some are both.

What if our AI is purely internal — no customer-facing decisions?

Internal use can still be high-risk if it affects employees (employment decisions, performance monitoring, promotion algorithms). HR-tech AI is one of the most commonly missed Annex III categories.

Can we just stop using AI to avoid compliance?

Most mid-market firms can't. The question is whether you've documented your existing use enough to keep operating. The cost of compliance is far less than the cost of stopping AI usage that's already producing value.

What if the Parliament's delay vote passes?

The deadlines shift, but the operational pressure doesn't. Your customers, insurers, and large enterprise partners will continue operating on the August 2026 timeline. The market doesn't wait for the regulator.

How much does AI Act compliance cost a mid-market firm?

For most mid-market firms with under 50 AI tools and no high-risk systems, the inventory and documentation work is roughly 80–120 hours of internal time. For firms with high-risk systems requiring formal oversight, factor in legal review and ongoing audit costs — typically €15,000–€80,000 in the first year, depending on system complexity.

What this means for your week

If your AI Act readiness needs a third opinion before August, we'll tell you whether you're behind, on track, or overdoing it.

Send us a message