EU AI Act from August 2026: What Enterprises with High-Risk AI Must Do Now

Editorial note: The information in this article was compiled to the best of our knowledge at the time of publication. Technical details, prices, versions, licensing terms, and external content may change. Please verify the information provided independently, particularly before making business-critical or security-related decisions. This article does not replace individual professional, legal, or tax advice.

AI Compliance with your own infrastructure — WZ-IT operates GPU servers and AI Cubes in German data centers: GDPR-compliant, audit-ready, EU AI Act ready. Schedule a free consultation
On August 2, 2026, the EU AI Act's requirements for high-risk AI systems become enforceable. Yet in April 2026, the EU Parliament's Industry Committee backed a proposal to push that deadline to August 2027. The political uncertainty is real, but the regulatory obligations are not going away.
This guide analyzes the current status, explains the concrete requirements, and provides a checklist for enterprises to prepare — regardless of whether the deadline holds or gets pushed back.
Table of Contents
- Timeline: What applies when
- The delay debate
- What is a high-risk AI system?
- The seven obligations for high-risk AI
- Penalties and enforcement
- Practical checklist for enterprises
- On-premise AI as compliance advantage
- How WZ-IT handles this
- Related guides
Timeline: What applies when
The EU AI Act entered into force on August 1, 2024. Implementation is phased:
| Date | What applies |
|---|---|
| February 2, 2025 | Prohibition of AI systems with unacceptable risk (social scoring, workplace emotion recognition, predictive policing) |
| August 2, 2025 | Obligations for General-Purpose AI (GPAI) models, governance structures, notification of conformity assessment bodies |
| August 2, 2026 | Requirements for high-risk AI systems under Annex III, transparency obligations, CE marking, EU database registration |
| August 2, 2027 | High-risk AI as safety components in regulated products (medical devices, machinery, vehicles) |
The August 2026 deadline affects the majority of enterprises using AI operationally: HR tools, credit checks, applicant screening, customer scoring, access control systems.
The delay debate
In April 2026, the EU Parliament's Industry Committee (ITRE) voted for a proposal to delay enforcement of high-risk requirements by one year to August 2027. The reasoning: many enterprises — particularly SMEs — are not ready, the necessary harmonized standards are not yet fully available, and premature enforcement without clear standards creates legal uncertainty.
What this means:
- The delay would only affect Annex III high-risk systems (standalone AI)
- Prohibited AI practices (since February 2025) and GPAI obligations (since August 2025) remain unchanged
- The Council of the EU must still approve the proposal — a final decision is expected for Q3 2026
- Even with a delay: the obligations themselves do not change, only the enforcement timeline
For enterprises this means: Do not wait. Delaying compliance work based on a possible extension risks being unprepared if the Council says no.
What is a high-risk AI system?
Annex III of the EU AI Act defines eight categories of high-risk AI systems:
- Biometric identification and categorization — Facial recognition, emotion recognition (where not prohibited)
- Critical infrastructure — AI control of electricity, water, gas, and transport networks
- Education and vocational training — Exam grading, access control for educational institutions
- Employment and workforce management — Applicant screening, AI-assisted performance evaluation, termination decisions
- Access to essential services — Creditworthiness assessment, insurance evaluation, social benefit decisions
- Law enforcement — Risk assessment, lie detection, predictive analytics
- Migration, asylum, and border control — Entry risk assessment, document verification
- Administration of justice and democratic processes — AI-assisted court decisions, election influence
Important: It does not matter whether the system is marketed as "AI." What matters is whether it qualifies as an AI system under the regulation and is deployed in one of these areas. A machine learning model that pre-screens job applications is high-risk — regardless of whether the vendor calls it "intelligent filtering."
Special case: Safety components in products
AI systems used as safety components in regulated products (medical devices under MDR, machinery under the Machinery Regulation, vehicles) also fall under high-risk classification. These are subject to the longer deadline until August 2027, as they are tied to existing EU product legislation.
The seven obligations for high-risk AI
Anyone operating or placing a high-risk AI system on the market must fulfill seven core obligations:
1. Risk management system (Article 9)
Continuous, documented risk management throughout the AI system's lifecycle:
- Identification and analysis of known and foreseeable risks
- Risk assessment under intended use and reasonably foreseeable misuse
- Risk mitigation measures
- Regular review and updates
2. Data governance (Article 10)
Training, validation, and testing must be based on datasets that:
- Ensure relevance, representativeness, and accuracy
- Have undergone bias checks
- Are documented and traceable
- Contain special categories of personal data only under strict conditions
3. Technical documentation (Article 11)
Comprehensive technical documentation must be prepared before placing on the market:
- General description of the system and its purpose
- Detailed description of the development methodology
- Information on training data and its preparation
- Performance metrics and measurement methodology
- Description of risk management measures
4. Record-keeping / Logging (Article 12)
High-risk AI systems must generate automatic records (logs):
- Start and end time of each use
- Reference database against which input was checked
- Input data that led to a match
- Identification of persons involved in human oversight
Logs must be retained for a period appropriate to the intended purpose — at minimum six months.
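The Article 12 duties above translate naturally into an append-only, structured audit log. The following Python sketch is illustrative only; the regulation prescribes no format, and all field names, file paths, and values here are our own hypothetical choices:

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class AuditLogRecord:
    """One Article 12 log entry; field names are illustrative, not prescribed."""
    request_id: str
    start_time: float      # Unix timestamp: start of each use
    end_time: float        # Unix timestamp: end of each use
    model_version: str     # which model/version produced the output
    input_digest: str      # hash of or reference to the input data
    reference_db: str      # database the input was checked against, if any
    oversight_user: str    # person responsible for human oversight

def append_record(record: AuditLogRecord, path: str) -> None:
    """Append-only JSONL; combined with restrictive file permissions or
    WORM storage this keeps the trail tamper-evident."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example usage (values are placeholders)
rec = AuditLogRecord(
    request_id=str(uuid.uuid4()),
    start_time=time.time(),
    end_time=time.time(),
    model_version="llama-3.3-70b",
    input_digest="sha256:0f3a...",
    reference_db="applicant-pool-2026",
    oversight_user="hr.reviewer@example.com",
)
append_record(rec, "/tmp/audit.jsonl")
```

Because records are plain JSON lines, they can be retained locally for as long as the intended purpose requires and exported unchanged for an audit.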
5. Transparency (Article 13)
Users must be adequately informed:
- About the system's capabilities and limitations
- About the degree of accuracy and known error sources
- About circumstances under which risks may arise
- About human oversight measures
6. Human oversight (Article 14)
High-risk AI systems must be designed to allow effective human oversight:
- The human must be able to understand the system's outputs
- The human must be able to interrupt or stop the system
- The human must be able to disregard or override the system's results
- The human must be able to recognize automation bias
7. Quality management system (Article 17)
Providers of high-risk AI must implement a QMS:
- Strategy for regulatory compliance
- Design and control techniques
- Risk management procedures
- Post-market monitoring
- Cybersecurity measures
Penalties and enforcement
Fines are tiered by severity of violation:
| Violation | Fine (Maximum) |
|---|---|
| Prohibited AI practices | EUR 35M or 7% of global annual turnover |
| High-risk obligations | EUR 15M or 3% of global annual turnover |
| False information to authorities | EUR 7.5M or 1% of global annual turnover |
The fine cap is whichever of the two amounts is higher; for a large corporation, the turnover-based percentage can therefore far exceed the absolute figure. For SMEs and startups, the lower of the two amounts applies instead.
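For orientation, the cap mechanics under Article 99 (the higher of the two amounts for most undertakings, the lower for SMEs and startups) can be sketched in a few lines of Python. The function name is our own; this is arithmetic, not legal advice:

```python
def max_fine(absolute_cap_eur: float, pct_of_turnover: float,
             annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of a fine: the higher of the absolute cap and the
    turnover-based cap; for SMEs/startups, the lower of the two."""
    turnover_based = pct_of_turnover * annual_turnover_eur
    return (min if is_sme else max)(absolute_cap_eur, turnover_based)

# High-risk violation (EUR 15M / 3%) for a group with EUR 2B turnover:
print(max_fine(15e6, 0.03, 2e9))               # 60000000.0 -> turnover cap bites
# Same violation for an SME with EUR 50M turnover:
print(max_fine(15e6, 0.03, 50e6, is_sme=True)) # 1500000.0 -> lower amount
```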
Enforcement happens at the national level. Each EU member state must designate a market surveillance authority. In Germany, the Federal Network Agency (Bundesnetzagentur) is slated for this role under the national implementation act.
Practical checklist for enterprises
Step 1: Create an AI inventory
Capture all AI systems in the enterprise — not just the obvious ones:
- Which models are in use? (In-house, API-based, embedded in SaaS)
- In what context? (HR, customer service, financial decisions, production)
- Who is the provider, who is the deployer?
- What data flows into the system?
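Such an inventory can start as simply as a structured list per system. The schema below is a hypothetical minimal example (names like "ACME GmbH" and the field set are placeholders, not a mandated format):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    """One row of the AI inventory; extend fields as the assessment matures."""
    name: str
    deployment: str               # "in-house" | "api" | "embedded-saas"
    context: str                  # e.g. "HR", "customer service", "finance"
    provider: str                 # who built / supplies the model
    deployer: str                 # who operates it (usually: you)
    data_inputs: list = field(default_factory=list)

inventory = [
    AISystemEntry("cv-screening", "in-house", "HR",
                  provider="internal", deployer="ACME GmbH",
                  data_inputs=["CV text", "application form"]),
    AISystemEntry("support-chatbot", "api", "customer service",
                  provider="<vendor>", deployer="ACME GmbH",
                  data_inputs=["chat transcripts"]),
]
```

Keeping the inventory machine-readable pays off in Step 2, where each entry must be mapped to a risk tier.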
Step 2: Risk categorization
For each identified system, check:
- Does it fall under Annex III? (High-risk standalone)
- Is it a safety component of a regulated product?
- Does it fall under transparency obligations (chatbots, deepfakes)?
- Or is it minimal risk (no specific obligations)?
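A first-pass triage of these checks can be encoded as a rule of thumb. The category keys below are simplified labels for the Annex III areas; the legal text, not this function, is authoritative:

```python
# Annex III areas, heavily simplified to short labels (our own naming)
ANNEX_III_CONTEXTS = {
    "biometrics", "critical-infrastructure", "education", "employment",
    "essential-services", "law-enforcement", "migration", "justice",
}

def risk_tier(context: str, is_safety_component: bool,
              interacts_with_humans: bool) -> str:
    """Very rough triage; a real classification needs legal review."""
    if is_safety_component:
        return "high-risk (product safety, Aug 2027 track)"
    if context in ANNEX_III_CONTEXTS:
        return "high-risk (Annex III, Aug 2026 track)"
    if interacts_with_humans:
        return "transparency obligations (e.g. chatbot disclosure)"
    return "minimal risk"

print(risk_tier("employment", False, True))  # high-risk (Annex III, Aug 2026 track)
```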
Step 3: Gap analysis
For each high-risk system, determine the gap between current state and target state:
- Does technical documentation exist?
- Is there a risk management system?
- Are logs being stored?
- Is human oversight implemented?
- Has a bias check been performed?
Step 4: Prepare conformity assessment
Most Annex III high-risk systems can be placed on the market through a self-assessment (internal conformity assessment). Exception: biometric identification systems require assessment by a notified body.
For self-assessment you need:
- Complete technical documentation
- Evidence of a QMS
- EU declaration of conformity
- CE marking
- Registration in the EU database
Step 5: Registration and CE marking
Before placing on the market or putting into service, the system must be registered in the EU database for high-risk AI systems. The database is operated by the EU Commission and is partially publicly accessible.
On-premise AI as compliance advantage
The EU AI Act's requirements sound complex — and they are. But one factor significantly simplifies compliance: control over the infrastructure.
Why local inference helps
| Requirement | Cloud API | On-premise |
|---|---|---|
| Technical documentation | Dependent on provider, often incomplete | Full control over model, version, configuration |
| Logging / Records | API logs often limited, not exportable | All logs local, unlimited retention |
| Data governance | Data leaves the enterprise | Data stays in the data center |
| Human oversight | Difficult with black-box APIs | Full access to model behavior |
| Bias checks | Only possible via output | Input, output, and model weights auditable |
| Risk management | Partially delegated to provider | Entirely within your own sphere of influence |
Concrete example: Applicant screening
An enterprise uses an LLM for pre-screening job applications. This is a high-risk use case (Category 4: Employment).
With Cloud API (e.g., GPT-4):
- Application data (name, CV, photo) sent to US servers
- No insight into model logic (Article 13 transparency problematic)
- Logs only available via API dashboard (Article 12 logging limited)
- Bias checks only on output (Article 10 data governance restricted)
- Under CLOUD Act request: access to applicant data by US authorities possible
With on-premise AI (e.g., Llama 3.3 70B on own GPU server):
- Application data never leaves the enterprise
- Model, prompt, and decision logic fully documentable
- All logs stored locally and auditable
- Bias checks on input data, prompt design, and output possible
- No third-country access, no CLOUD Act issue
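A minimal sketch of the local setup, assuming an Ollama server running on the same host (Ollama's documented `/api/generate` endpoint on its default port; the model tag is illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt: str, model: str = "llama3.3:70b") -> dict:
    """Request body for /api/generate; stream=False yields one JSON reply."""
    return {"model": model, "prompt": prompt, "stream": False}

def local_generate(prompt: str, model: str = "llama3.3:70b") -> str:
    """Run inference against the local model; the prompt never leaves this host."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # local socket, no third party
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama instance with the model pulled):
# answer = local_generate("Summarize the key qualifications in this CV: ...")
```

Every request passes through code you control, so the Article 12 audit trail and Article 10 bias checks can hook in at this exact point.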
Both approaches can be operated in compliance with the EU AI Act. But the on-premise approach makes compliance practically achievable, while the cloud approach relies on provider assurances.
How WZ-IT handles this
We help enterprises build AI infrastructure that is EU AI Act-compliant — not with consulting slides, but with concrete technology:
- Assessment: Which AI systems are in use? Which fall under high-risk? Which under transparency obligations?
- Infrastructure design: dedicated GPU servers or an AI Cube in German data centers for high-risk applications, with full control over models, data, and logs.
- Logging and audit trail: We configure complete request-response logging with timestamps, user IDs, and model versions, exportable for audits and regulatory inquiries.
- Managed Operations: Ongoing operations including monitoring, updates, and compliance documentation, so the technical documentation doesn't go stale.
- GDPR + AI Act: We address both regulatory frameworks simultaneously. Local inference solves GDPR transfer issues and simplifies AI Act documentation.
Whether GPU servers for production-grade LLM inference or AI Cube as an entry point — the infrastructure runs in German data centers, operated by European hands.
Related guides
- AI Sovereignty: Why German Companies Should Not Send Data to US AI Services
- GDPR-Compliant AI Inference with GPU Servers
- Ollama vs. vLLM: Comparison for Self-Hosted LLMs
- Local AI Inference with the AI Cube
- Managed Operations: Compliance
- GPU Server at WZ-IT
- AI Cube
As of May 2026. The EU AI Act is an evolving regulation. The deadlines described in this article may change due to ongoing legislative procedures. Enterprises should follow the official channels of the EU Commission and the Federal Network Agency (Bundesnetzagentur).
Frequently Asked Questions
When do the high-risk requirements become enforceable?
The official date is August 2, 2026. However, the EU Parliament's Industry Committee voted in April 2026 to potentially delay enforcement for certain high-risk AI systems to August 2, 2027. The final decision is still pending.

Which AI systems count as high-risk?
Annex III of the EU AI Act defines eight categories: biometric identification, critical infrastructure, education, employment, creditworthiness assessment, law enforcement, migration, and justice. AI systems used as safety components in regulated products also qualify.

How high are the penalties for violations?
Up to 35 million EUR or 7% of global annual turnover for prohibited AI practices, up to 15 million EUR or 3% for high-risk obligation violations. For SMEs, the lower of the two amounts applies.

What obligations apply to high-risk AI systems?
Technical documentation, risk management system, data governance, record-keeping (logs), transparency information for users, human oversight measures, and a quality management system.

Does on-premise operation simplify compliance?
Yes. Local inference provides full control over data, audit trails, and model behavior. This significantly simplifies documentation, risk assessment, and human oversight implementation, especially for high-risk applications.

Does the EU AI Act apply to open-source models?
Generally yes, when an open-source model is deployed in a high-risk system. Exceptions exist for models published under open-source licenses that are not deployed in regulated contexts, but the deployer always bears the compliance obligation.

Written by
Timo Wevelsiep
Co-Founder & CEO
Co-Founder of WZ-IT. Specialized in cloud infrastructure, open-source platforms and managed services for SMEs and enterprise clients worldwide.