
The Standards Gap — What to Do When Half the Rulebook Is Missing
You can't follow rules that don't exist yet. And that's actually fine.
Here's the situation: the EU AI Act passed. Deadlines are locked in. But only 15 of the 45 technical standards are published. Nearly half won't be ready when high-risk AI obligations take effect in August 2026.
This sounds terrifying. It's actually liberating.
Because you're not blocked. You're just freed from the illusion that someone has all the answers. Nobody does. Not consultants, not your competitors, not even the regulators who wrote the thing.
So what do you actually do? You focus on what's ready. You document what you're doing. And you don't pay anyone who promises certainty that doesn't exist.
The Standards Timeline Problem
The AI Act sets obligations. Technical standards tell you how to meet them.
But the standards process is slow. CEN-CENELEC (the joint European standards bodies) is working through 45 standards. As of February 2026, only 15 are published. Some won't land until 2027 or later.
The catch-22: your high-risk AI obligations kick in August 2026, whether the standards are ready or not.
This feels like being told to take a test before the textbook arrives. And in a way, that's exactly what's happening.
What's Ready vs. What's Missing
Here's the current state of play:
Published and usable now:
- Risk management frameworks (adapted from existing ISO/IEC standards)
- Data governance basics (quality, representativeness, bias checks)
- Transparency disclosure templates (what to tell users)
- Human oversight requirements (when humans need to stay in the loop)
- Record-keeping structures (what logs to maintain)
- Basic cybersecurity measures
Still in draft or missing:
- Specific conformity assessment procedures for novel high-risk systems
- Detailed accuracy measurement methodologies for certain use cases
- Provider-deployer information exchange formats
- Post-market monitoring reporting standards
- Some sector-specific technical specifications
The pattern: general process standards are ready. Hyper-specific technical specs are not.
What You Can Start Monday Morning
Don't wait. Here's what you can do right now, with zero dependency on missing standards:
1. Build your AI system inventory
List every AI tool you build or deploy. Include SaaS tools that use AI under the hood. You need to know what you're dealing with before you can classify it.
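If it helps to make this concrete, here is a minimal sketch of what one inventory entry might capture. The field names and example systems are illustrative assumptions, not a prescribed format; a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (illustrative fields only)."""
    name: str                  # e.g. "Support chatbot"
    vendor_or_internal: str    # "vendor" or "built in-house"
    purpose: str               # what the system is actually used for
    affects_people: bool       # does it make or shape decisions about people?
    data_used: list[str] = field(default_factory=list)  # categories of input data
    owner: str = ""            # person responsible for this system

# A two-entry inventory to start from (hypothetical examples)
inventory = [
    AISystemRecord("Support chatbot", "vendor", "answers customer questions",
                   affects_people=False, data_used=["chat transcripts"], owner="J. Doe"),
    AISystemRecord("CV screening tool", "vendor", "ranks job applicants",
                   affects_people=True, data_used=["CVs"], owner="HR lead"),
]

for record in inventory:
    print(record.name, "- affects people:", record.affects_people)
```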
2. Run risk classification
Is this AI making important decisions about people? Hiring, firing, creditworthiness, access to services? That's likely high-risk. Everything else is probably transparency-only.
Most SMBs land in transparency territory. That's good news — the requirements are clearer.
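As a rough first pass, you can screen each inventory entry against the kinds of use cases Annex III covers. The keyword list below is an abbreviated, assumed paraphrase for illustration only; the actual Annex III text is what counts, and anything flagged still needs a human read.

```python
# Keywords loosely paraphrasing some Annex III areas (illustrative, not exhaustive).
HIGH_RISK_HINTS = [
    "hiring", "recruitment", "firing", "creditworthiness", "credit scoring",
    "education admission", "exam scoring", "access to essential services",
    "law enforcement", "migration", "biometric identification",
]

def first_pass_classification(purpose: str) -> str:
    """Very rough screen: flags likely high-risk uses for closer review."""
    text = purpose.lower()
    if any(hint in text for hint in HIGH_RISK_HINTS):
        return "review as potentially high-risk"
    return "likely transparency obligations only"

print(first_pass_classification("ranks job applicants during recruitment"))
# -> review as potentially high-risk
print(first_pass_classification("answers customer questions"))
# -> likely transparency obligations only
```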
3. Implement transparency disclosures (Article 50)
If you're using AI-generated content, chatbots, emotion recognition, or biometric categorization, you need to tell people. This applies even to low-risk systems.
The standard for this is ready. The language is clear. Start now.
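In practice, the step is often as simple as attaching a visible notice at the point where AI output reaches the user. A minimal sketch, assuming you control the final text shown to customers; the notice wording here is placeholder language, not official or legal text.

```python
AI_CONTENT_NOTICE = "This text was generated with the assistance of AI."
CHATBOT_NOTICE = "You are chatting with an automated assistant, not a human."

def with_disclosure(ai_output: str, notice: str = AI_CONTENT_NOTICE) -> str:
    """Prepend a plain-language disclosure to AI-generated text before it is shown."""
    return f"{notice}\n\n{ai_output}"

print(with_disclosure("Here is a summary of your order status...", CHATBOT_NOTICE))
```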
4. Document your risk assessment process
Even if technical standards evolve, you need a record of how you thought about risks. Write down (one way to structure the record is sketched after this list):
- What risks you identified
- How you evaluated them
- What mitigations you put in place
- Who made the decisions
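A lightweight way to keep that record is one dated entry per system, stored anywhere you can retrieve it later. The structure below is an assumed sketch with hypothetical content; what you write matters far more than the format.

```python
import json
from datetime import date

# One risk assessment entry for one system (hypothetical example).
risk_assessment = {
    "system": "CV screening tool",
    "date": date(2026, 2, 10).isoformat(),
    "risks_identified": [
        "possible bias against certain applicant groups",
        "over-reliance on automated ranking by recruiters",
    ],
    "evaluation": "vendor bias report reviewed; sample of rankings spot-checked",
    "mitigations": [
        "human review of every rejection",
        "quarterly check of outcomes by applicant group",
    ],
    "decided_by": "HR lead and CTO",
}

# Append-only file so the trail of decisions is preserved over time.
with open("risk_assessments.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(risk_assessment) + "\n")
```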
This isn't busywork. If regulators come knocking in 2027, "we didn't know what to do" won't fly. "Here's our documented process from 2026" will.
5. Set up basic governance
Who's responsible for AI readiness in your company? Who reviews new AI deployments? Who handles user complaints?
You don't need a 47-person committee. You need a name, a process, and a paper trail.
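The paper trail can be as simple as an append-only log of AI-related decisions: who approved a new deployment, when, and why. A minimal sketch, assuming a shared CSV file is acceptable at your company's size; the file name and fields are illustrative assumptions.

```python
import csv
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("ai_decision_log.csv")
FIELDS = ["timestamp", "decision", "system", "decided_by", "rationale"]

def record_decision(decision: str, system: str, decided_by: str, rationale: str) -> None:
    """Append one AI governance decision to a shared CSV log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "decision": decision,
            "system": system,
            "decided_by": decided_by,
            "rationale": rationale,
        })

record_decision("approved pilot", "Support chatbot", "A. Owner",
                "low-risk, disclosure added to chat window")
```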
What's Actually Blocked by Missing Standards
Here's what you genuinely can't finish yet:
Conformity assessment for novel high-risk systems
If you're building something new and high-risk (not buying off-the-shelf), the detailed conformity assessment procedures aren't finalized. You can prepare your documentation, but you can't complete the formal assessment yet.
Sector-specific technical requirements
Some industries (healthcare AI, legal AI) are waiting for more specific standards. You can follow the general framework, but detailed technical specs are still coming.
Standardized reporting formats
How you report to authorities, how providers share info with deployers — some of these formats are still in draft.
But notice what's NOT on this list: understanding your systems, documenting your risks, implementing transparency measures. None of that is blocked.
The Trap to Avoid
Consultants who promise complete AI Act readiness today are lying.
Not because they're malicious. Because the standards don't exist yet. Nobody can give you a stamped, final answer.
What you want is someone who helps you do what's ready, documents your process, and builds a system that adapts as standards arrive.
What you don't want is someone selling a $50,000 "full readiness package" that overpromises certainty.
Focus on Transparency First
If you're an SMB using AI (not building novel high-risk systems), your biggest obligation is transparency.
Article 50 is clear:
- AI-generated content? Disclose it.
- Chatbot? Tell users it's not human.
- Emotion recognition? Let people know.
- Biometric categorization? Disclosure required.
These requirements affect way more companies than high-risk obligations. And the standards for transparency are ready.
Start there. Get your disclosures right. Document it. Then work backwards to risk classification.
What Monday Morning Looks Like
Here's your actual action plan:
This week:
- List every AI system you use or build (1 hour)
- Draft transparency disclosures for customer-facing AI (2 hours)
- Assign one person as AI readiness owner (5 minutes)
This month:
- Classify systems by risk level using Annex III (2 hours)
- Document your risk assessment process (4 hours)
- Set up a log for AI-related decisions and changes (1 hour)
This quarter:
- Build templates for ongoing documentation (1 day)
- Train relevant teams on transparency requirements (half day)
- Review vendor contracts for AI tools you deploy (1 day)
None of this requires standards that don't exist yet. None of this is wasted if standards change. All of it demonstrates good-faith readiness.
The Honest Truth
You can't be "AI Act compliant" in February 2026 because half the rulebook isn't written yet.
But you can be ready. You can show you're taking it seriously. You can document what you're doing and why.
That's not a loophole. That's what regulators expect from reasonable companies facing an impossible timeline.
The standards will arrive. Your job is to be ready to adapt when they do — and to prove you weren't sitting on your hands in the meantime.
Download our Standards Status Cheat Sheet and focus on what's ready. We track which standards are published, which are in draft, and which obligations you can tackle today.
This document supports readiness preparation. It does not constitute legal advice.
Ready to find out if this applies to you?
The AI Act assessment takes 3 minutes. No signup. You'll see your classification instantly.
Take the assessment
Or stay in the loop
Get updates when rules change. No spam.