Strategy

EU AI Act Compliance: What Software Teams Need to Build Before August 2026

A practical compliance guide to the EU AI Act for software development teams. Risk classification, documentation requirements, technical obligations, and a step-by-step implementation checklist before the August 2026 deadline.

Notix Team
Notix Team Software Development Experts
| · 13 min read
EU AI Act Compliance: What Software Teams Need to Build Before August 2026

EU AI Act Compliance: What Software Teams Need to Build Before August 2026

The EU AI Act is the most significant piece of AI regulation in history. It entered into force on August 1, 2024. The prohibited practices provisions took effect on February 2, 2025. And the big one — the full application of rules for high-risk AI systems — takes effect on August 2, 2026. That’s less than six months from now.

If your software serves EU users or your company operates in the EU, this applies to you. The regulation’s territorial scope mirrors GDPR: it applies to any AI system placed on the EU market or used in the EU, regardless of where the provider is based. A company in the United States, Serbia, or Singapore that deploys AI-powered features accessible to EU residents falls under this regulation.

The penalties are real. Violations of the prohibited practices rules carry fines of up to 35 million euros or 7% of global annual turnover — whichever is higher. For other violations, fines reach 15 million euros or 3% of turnover. For providing incorrect information to authorities, it’s 7.5 million euros or 1% of turnover. These are GDPR-level penalties, and regulators have shown with GDPR that they’re willing to enforce them.

This article covers what software teams need to understand and build to comply with the EU AI Act before the August 2026 deadline.

The Risk Classification System: Where Does Your AI Fit?

The EU AI Act uses a risk-based approach. Not all AI systems are treated equally. The regulatory burden depends on the risk level of your system.

Unacceptable Risk (Prohibited)

These AI systems are banned outright as of February 2025:

  • Social scoring. AI systems that evaluate people based on social behavior or personal characteristics for purposes unrelated to the original data collection.
  • Exploitative systems. AI that exploits vulnerabilities of specific groups (children, disabled persons, economically vulnerable people).
  • Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions).
  • Emotion recognition in workplaces and educational institutions.
  • Untargeted facial image scraping to build recognition databases.
  • Predictive policing based solely on profiling or personality traits.

If your software does any of these things, stop. There is no compliance pathway. These are prohibited.

High Risk

This is where most of the compliance work lives. AI systems are classified as high-risk if they fall into specific categories defined in Annex III of the regulation:

  • Biometric identification and categorization of natural persons.
  • Critical infrastructure management — AI used in the management and operation of road traffic, water, gas, heating, and electricity supply.
  • Education and vocational training — AI used for determining access to education, evaluating learning outcomes, or assessing appropriate education levels.
  • Employment and worker management — AI for recruitment, screening, evaluating candidates, making promotion or termination decisions, task allocation, and performance monitoring.
  • Access to essential services — AI for credit scoring, insurance pricing, evaluating public assistance eligibility, or dispatching emergency services.
  • Law enforcement — AI for risk assessment of individuals, polygraphs, evidence evaluation, and crime prediction.
  • Migration and border control — AI for risk assessment, document verification, and asylum application processing.
  • Administration of justice — AI for researching and interpreting facts and law, and applying the law to facts.

If your AI system performs any of these functions, it’s likely high-risk and subject to the full set of compliance requirements.

Limited Risk

AI systems with specific transparency obligations. This primarily covers:

  • Chatbots and conversational AI — users must be informed they’re interacting with AI.
  • Emotion recognition systems (where not prohibited) — users must be informed.
  • Deep fakes — must be clearly labeled.
  • AI-generated content — must be marked as machine-generated.

The requirement here is disclosure, not prohibition.

Minimal Risk

AI systems not covered by the above categories — spam filters, AI-powered recommendations, game AI, most analytical tools. No specific compliance requirements beyond voluntary codes of practice, though general product safety and consumer protection laws still apply.

What High-Risk AI Systems Must Implement

If your system is classified as high-risk, here are the specific technical and organizational requirements your software team needs to build.

1. Risk Management System (Article 9)

You need a documented, iterative risk management process that runs throughout the AI system’s lifecycle. This isn’t a one-time assessment. It’s a continuous process that includes:

  • Identification and analysis of known and foreseeable risks.
  • Estimation and evaluation of risks that may emerge when the system is used as intended and under conditions of reasonably foreseeable misuse.
  • Risk mitigation measures — design choices, technical safeguards, and operational procedures.
  • Residual risk assessment — after mitigation, what risks remain? Are they acceptable?

For software teams, this means building risk management into your development process. Every sprint that touches the AI system should include a risk assessment review. Every deployment should document the risk state.

2. Data Governance (Article 10)

Training, validation, and testing datasets must meet specific quality criteria:

  • Relevance and representativeness. The data must be appropriate for the system’s purpose and reflect the population it will serve.
  • Bias examination. You must actively examine training data for biases and implement mitigation measures.
  • Gap identification. Identify gaps in the data that could lead to discriminatory outcomes.
  • Data integrity. Ensure data is accurate, complete, and free of errors.

Practically, this means building data quality pipelines with bias detection. You need documentation that traces every dataset used in training, its characteristics, its known limitations, and the bias mitigation steps applied.

3. Technical Documentation (Article 11)

Before an AI system is placed on the market, you must prepare comprehensive technical documentation. This is not your standard API documentation or README. The regulation specifies what must be included:

  • General description of the AI system.
  • Detailed description of the elements and development process.
  • Monitoring, functioning, and control of the AI system.
  • Information about the system’s accuracy, robustness, and cybersecurity.
  • Description of the risk management system.
  • Description of any changes made during the system’s lifecycle.

For software teams, this means maintaining living documentation that tracks the system’s evolution. Version control for documentation is as important as version control for code.

4. Record Keeping and Logging (Article 12)

High-risk AI systems must be designed to automatically log events throughout the system’s lifecycle. Logs must enable:

  • Tracing the system’s operation — what inputs led to what outputs and what decisions.
  • Monitoring compliance with the regulation.
  • Post-market monitoring by the provider.

The logs must be retained for a period appropriate to the system’s purpose, and at least six months unless provided otherwise by applicable law.

For software teams, this means building comprehensive audit logging into the AI system from the start. Every inference, every decision, every input-output pair must be traceable. This is not optional logging you can add later — it’s a design requirement.

5. Transparency and User Information (Article 13)

High-risk AI systems must be designed to ensure sufficient transparency for deployers to interpret and use the system’s output appropriately. You must provide clear instructions for use that include:

  • The system’s intended purpose.
  • The level of accuracy, robustness, and cybersecurity the system was tested and validated for.
  • Known or foreseeable circumstances that may lead to risks.
  • The system’s performance with respect to specific groups of persons.
  • Specifications for input data.
  • Human oversight measures.

This translates to building user-facing documentation and interfaces that make the AI system’s behavior understandable. Confidence scores, explanation features, and clear communication of limitations are not nice-to-haves — they’re compliance requirements.

6. Human Oversight (Article 14)

High-risk AI systems must be designed to allow effective human oversight during the period they’re in use. The system must enable humans to:

  • Fully understand the system’s capabilities and limitations.
  • Monitor its operation, including through dashboards and alerting.
  • Interpret the system’s outputs correctly.
  • Decide not to use the system or override its output in any particular situation.
  • Intervene or interrupt the system’s operation (the “stop button”).

For software teams, this means building human-in-the-loop interfaces, override mechanisms, and monitoring dashboards. The system must be designed so that a human can always take control. Automated AI decisions that cannot be overridden will not comply.

7. Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle. Specific requirements include:

  • Accuracy metrics declared in the instructions for use.
  • Resilience against errors, faults, and inconsistencies in the data.
  • Protection against adversarial attacks — prompt injection, data poisoning, model manipulation.
  • Cybersecurity measures appropriate to the risks, including protection of the training data.

This means building adversarial testing into your QA pipeline, implementing input validation and output filtering, and maintaining cybersecurity measures that go beyond basic web application security.

Practical Compliance Checklist for Software Teams

Here’s a step-by-step checklist organized by urgency, given the August 2026 deadline.

Immediate (Now — March 2026)

  • Classify your AI systems. Map every AI feature in your product against the risk classification. Be thorough — AI components embedded in larger systems still count.
  • Audit prohibited practices. Verify none of your systems fall into the prohibited category. If they do, decommission them immediately.
  • Assign responsibility. Designate a team or individual responsible for AI Act compliance. This could be a cross-functional team spanning engineering, legal, and product.
  • Inventory your data. Document all datasets used for training, validation, and testing. Record their sources, characteristics, and known biases.

Short-Term (March — May 2026)

  • Implement risk management. Establish a formal risk management process for each high-risk AI system. Document identified risks, mitigation measures, and residual risks.
  • Build audit logging. Implement comprehensive event logging for all high-risk AI systems. Ensure logs capture inputs, outputs, decisions, and all intermediate reasoning steps.
  • Design human oversight interfaces. Build monitoring dashboards, override mechanisms, and alert systems that enable effective human oversight.
  • Conduct bias testing. Test your AI systems for discriminatory outcomes across protected demographic categories. Document results and mitigation actions.
  • Prepare technical documentation. Start drafting the technical documentation required by Article 11. This is extensive and cannot be done in a week.

Medium-Term (May — July 2026)

  • Implement transparency features. Build explanation mechanisms, confidence scores, and limitation disclosures into user-facing AI features.
  • Conduct adversarial testing. Test AI systems against prompt injection, data poisoning, and other adversarial attacks. Document findings and mitigations.
  • Establish post-market monitoring. Build systems to continuously monitor AI performance, accuracy, and bias after deployment. Create processes for addressing drift and degradation.
  • Complete conformity assessment. For high-risk systems, complete the required conformity assessment procedure. Some categories require third-party assessment.
  • Register in the EU database. High-risk AI systems must be registered in the EU database before they’re placed on the market.

Ongoing (August 2026 and Beyond)

  • Continuous monitoring. Run post-market monitoring continuously. Track accuracy, bias, and incident reports.
  • Incident reporting. Report serious incidents to national authorities within specified timeframes.
  • Documentation updates. Keep all documentation current as the system evolves.
  • Periodic risk reassessment. Repeat risk assessments at regular intervals and whenever significant changes are made.

How This Affects Different Types of Software Companies

SaaS Companies with AI Features

If you’ve added AI-powered features to your SaaS product — automated resume screening, credit scoring, AI-generated content — each feature needs individual risk classification. The fact that the AI is one feature among many doesn’t exempt it from compliance. Pay particular attention to features in the high-risk categories: HR tools with AI screening, financial products with AI scoring, and educational platforms with AI assessment.

AI Startups

If AI is your core product, the AI Act affects your entire business. The compliance investment is significant but unavoidable. Build compliance into your development process from the start — retrofitting is more expensive. The good news: compliance can be a competitive advantage. Enterprise buyers in the EU are increasingly requiring AI Act compliance from their vendors. Being compliant early means you can sell to organizations that your non-compliant competitors cannot.

Enterprise IT Teams

If you’re building internal AI tools (automated document processing, internal chatbots, workflow automation), many of these fall into limited or minimal risk categories. But check carefully — an internal HR tool that uses AI for performance evaluations could be high-risk. Internal use doesn’t exempt you from the regulation.

Software Development Agencies

If you build AI systems for clients, you need to ensure that the systems you deliver are compliant. This means understanding the client’s use case well enough to classify the risk level and building the compliance infrastructure (logging, documentation, oversight) as part of the deliverable. At Notix, we’ve incorporated EU AI Act compliance considerations into our AI development process. When we built the FENIX AI-powered quoting system for manufacturing, compliance-by-design wasn’t required at the time, but the architecture decisions we made — audit logging, human oversight, transparent decision-making — align with what the AI Act now mandates. That kind of forward-thinking architecture saves significant rework.

Common Misconceptions

“My AI system is low risk because it just recommends things”

Depends on what it recommends. A product recommendation engine is minimal risk. An AI system that recommends whether to approve a loan application is high risk. The risk classification depends on the domain and the impact on individuals, not on the technical mechanism.

“This only applies to EU companies”

No. It applies to any AI system placed on the EU market or whose output is used in the EU. If you have EU customers, it likely applies to you.

“We use third-party AI models, so the model provider is responsible”

Partially. Model providers (like OpenAI, Anthropic, Google) have obligations as providers of general-purpose AI models. But if you build a high-risk application on top of those models, you’re the provider of that application and bear the compliance obligations for the application layer.

No. The AI Act requires technical measures — logging, monitoring, human oversight, bias testing. Legal disclaimers don’t substitute for engineering work.

“We’ll wait and see if it’s actually enforced”

The EU spent years developing GDPR enforcement infrastructure. They’ve levied billions in fines. The AI Act uses the same enforcement model with the same national authorities. Enforcement will happen.

The Business Case for Early Compliance

Compliance isn’t just a cost — it’s a competitive differentiator. Consider:

  • Market access. Non-compliant AI systems cannot be legally deployed in the EU market. The EU represents 450 million consumers and some of the world’s largest enterprise buyers.
  • Enterprise sales. Large organizations are already including AI Act compliance in their procurement requirements. Being compliant opens doors that non-compliance closes.
  • Trust. In a market where AI trust is a growing concern, demonstrable compliance signals responsibility. This matters for B2B sales, partnerships, and public perception.
  • Risk reduction. The compliance requirements — risk management, logging, human oversight, bias testing — are engineering best practices. They reduce the risk of AI failures, security incidents, and reputational damage regardless of the regulatory environment.

What to Do Right Now

If you haven’t started your EU AI Act compliance work, start today. The August 2026 deadline is not far away, and the technical implementation requirements are substantial.

  1. Classify your AI systems. This is step zero. You can’t plan compliance work without knowing which of your systems are affected and at what risk level.

  2. Prioritize high-risk systems. Focus your compliance investment where the regulatory requirements are highest and the penalties for non-compliance are most severe.

  3. Build the logging infrastructure first. Audit logging is a prerequisite for almost everything else — risk management, human oversight, post-market monitoring, and incident reporting all depend on having comprehensive event logs.

  4. Engage legal expertise. The AI Act is complex, and interpretation matters. Legal counsel experienced in EU technology regulation can help you classify your systems correctly and identify requirements you might miss.

  5. Consider compliance-as-architecture. The most efficient path to compliance is building the requirements into your system architecture rather than bolting them on afterward. If you’re building new AI features, design for compliance from the start.

The EU AI Act represents a fundamental shift in how AI systems are governed. For software teams, it means that building AI is no longer just a technical challenge — it’s a regulatory one. The teams that integrate compliance into their development process will build better AI systems and have access to markets that their competitors will be locked out of.

Share

Ready to Build Your Next Project?

From custom software to AI automation, our team delivers solutions that drive measurable results. Let's discuss your project.

Notix Team

Notix Team

Software Development Experts

The Notix team combines youthful ambition with seasoned expertise to deliver custom software, web, mobile, and AI solutions from Belgrade, Serbia.