
The AI Act: How we build compliance into the DNA of your AI system

Welcome to a new era of regulation. The EU’s AI Act is not just another set of rules—it’s a fundamental shift in the approach to creating and deploying artificial intelligence systems, especially those designated as high-risk. For many companies, this is a challenge. For us at Formal Foundry, it’s a validation of our mission.

The traditional approach of testing a finished product for errors is insufficient in the context of the AI Act. It’s like looking for a needle in a haystack while the haystack itself is growing and changing shape. Our answer is simple, though based on advanced technology: compliance by design. Instead of testing if a system is compliant, we design it so that compliance is its inviolable, mathematically proven characteristic.

Let’s explore how our technology directly addresses the key requirements of the AI Act.


Challenge #1: Strict Requirements for High-Risk AI Systems (Chapter III, Section 2)

The AI Act imposes the most stringent obligations on systems that can significantly impact the health, safety, or fundamental rights of citizens. According to Article 6 and Annex III, these include systems used in:

  • Critical infrastructure (e.g., traffic management, water supply).
  • Education (e.g., evaluating exams, admissions).
  • Employment (e.g., sorting CVs, evaluating employees).
  • Essential private and public services (e.g., credit scoring).
  • Administration of justice and democratic processes.
  • Medical devices.

For these systems, an error isn’t a statistic. It’s real-world harm.

Formal Foundry’s Solution: Proofs Instead of Promises

The key to risk management is certainty of operation. Our approach is built on formal methods and proof assistants.

  • How it works: Instead of describing a system’s rules in natural language (which is inherently ambiguous), we formalize domain knowledge. We translate complex legal regulations, medical standards, or business rules into a precise, mathematical language.
  • The result: The AI system doesn’t just learn the rules; it operates within a framework where it can be mathematically proven that its behavior complies with those rules. We eliminate the risk of “hallucinations” or unpredictable behavior at critical moments.
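To make the idea concrete, here is a minimal Python analogue of compliance by design. In our actual stack the rules live in Agda, where the type checker enforces them; the rule, the field names, and the credit-denial scenario below are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical formalized rule: "every credit denial must cite a
# permitted reason." Written as an executable predicate, not prose.
PERMITTED_REASONS = {"insufficient_income", "negative_credit_history"}

@dataclass(frozen=True)
class Decision:
    approved: bool
    reason: Optional[str]  # must be a permitted reason when denying

def complies(d: Decision) -> bool:
    """The formalized rule: denials require a permitted, stated reason."""
    if d.approved:
        return True
    return d.reason in PERMITTED_REASONS

def certified(d: Decision) -> Decision:
    """Gate every output: a non-compliant decision never leaves the system."""
    if not complies(d):
        raise ValueError(f"no compliance proof for {d}")
    return d

certified(Decision(approved=False, reason="insufficient_income"))  # passes
# certified(Decision(approved=False, reason=None))  # would raise
```

The point is the gate, not the rule: once the rule is machine-checkable, non-compliant behavior becomes unrepresentable output rather than a bug to hunt for later.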

Challenge #2: Accuracy, Robustness, and Cybersecurity (Article 15)

The AI Act requires high-risk systems to operate with an “appropriate level of accuracy, robustness, and cybersecurity.” They must be resilient to errors, inconsistencies, and attempts at manipulation.

Formal Foundry’s Solution: Correctness at Scale

This is the core of our value proposition: ensuring Correctness at Scale. Instead of relying on statistical accuracy metrics that always have a margin of error, we deliver proof of correctness.

  • Accuracy: Our architecture verifies every output of the AI system. The proof generation component, using a type checker, validates that the generated output has a formal proof of compliance with the predefined rules. If there is no proof, the output is not considered correct. This is a binary guarantee, not an approximation.
  • Robustness: Formally verified systems are inherently more robust. Their logic is consistent and free of internal contradictions. This ensures they operate predictably, even in unusual or edge cases that traditional testing often misses. Our Category Theory Solver is a testament to our ability to formalize and automate sophisticated reasoning, making workflows verifiable.
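The "binary guarantee" above can be sketched as follows, again in simplified Python rather than our real Agda infrastructure. The generator may be an unreliable model, but a separate trusted checker replays its claimed derivation step by step; acceptance is all-or-nothing, with no confidence threshold. The rule names and facts are hypothetical.

```python
# Invented rule table: name -> (premises required, conclusion drawn).
RULES = {
    "adult":    (["age>=18"], "may_sign_contract"),
    "verified": (["id_checked"], "identity_ok"),
}

def check_proof(facts, steps, claim):
    """Replay each derivation step independently; fail closed on any gap."""
    known = set(facts)
    for rule_name in steps:
        if rule_name not in RULES:
            return False          # unknown rule: reject
        premises, conclusion = RULES[rule_name]
        if not all(p in known for p in premises):
            return False          # step does not follow: reject
        known.add(conclusion)
    return claim in known         # the proof must establish the claimed output

check_proof({"age>=18"}, ["adult"], "may_sign_contract")  # True
check_proof(set(),       ["adult"], "may_sign_contract")  # False
```

Note the asymmetry: the checker is small, deterministic, and auditable, even though the component that proposes the derivation may be arbitrarily complex.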

Challenge #3: Transparency and Provision of Information (Article 13) & Human Oversight (Article 14)

The regulation places immense emphasis on ensuring that AI systems are not “black boxes.” Users must understand how a system works, and humans must be able to effectively supervise and intervene.

Formal Foundry’s Solution: Translating Proofs into Understandable Language

We understand that Agda code isn’t for everyone. That’s why a key element of our architecture is the translation component.

  • Understandable Explanations: We can translate a formal proof of correctness into semi-natural language. This allows an auditor, regulator, or even an end-user to receive a clear, logical justification for why the system made a particular decision—step-by-step, according to precisely defined rules.
  • Effective Oversight: By giving people insight into the AI’s decision-making process, we enable real, not illusory, supervision. An operator can see which rule was applied and why, which is crucial for meeting the requirements of Article 14.
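A toy rendering of the translation component might look like this: a machine-checked derivation is turned into a step-by-step justification a human overseer can read. The rule table and wording are invented; the real component translates Agda proof terms.

```python
# Hypothetical rule table: name -> (premises, conclusion).
RULES = {
    "adult": (["age>=18"], "may_sign_contract"),
}

def explain(facts, steps):
    """Render a derivation as a human-readable, step-by-step justification."""
    lines = [f"Given: {', '.join(sorted(facts))}."]
    for rule in steps:
        premises, conclusion = RULES[rule]
        lines.append(
            f"Since {' and '.join(premises)} holds, "
            f"rule '{rule}' yields: {conclusion}."
        )
    return "\n".join(lines)

print(explain({"age>=18"}, ["adult"]))
```

Because the explanation is generated from the proof itself, it cannot drift from what the system actually did, which is precisely what Article 14 oversight requires.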

Challenge #4: Technical Documentation and Record-Keeping (Articles 11 and 12)

Companies must be able to demonstrate ex post how their system was built, on what principles it operates, and how it was validated. This requires precise and consistently updated documentation.

Formal Foundry’s Solution: Documentation as a Living Artifact

In our approach, the formalization is simultaneously the specification, the code, and the documentation.

  • A Single Source of Truth: The formal record of business or regulatory logic serves as precise, unambiguous documentation of the system’s behavior. There is no room for discrepancies between what the system was intended to do and what it actually does.
  • Automated Record-Keeping: Every operation verified by our system leaves a trace in the form of a formal proof. This creates an immutable and auditable record (log) that directly addresses the requirements of Article 12 on record-keeping. Our work on the Agda Server and Agda Runtime demonstrates our capability to create practical infrastructure for deploying and logging formal verification methods in industrial settings.
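One way to make such a record tamper-evident is hash chaining, sketched below in Python. This is a generic technique, not a description of the Agda Server internals, and the field names are illustrative: each verified operation is appended with a hash linking it to the previous entry, so any retroactive edit breaks the chain.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def append_record(log, payload):
    """Append a verified operation, chained to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    log.append({
        "prev": prev,
        "payload": payload,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })

def verify_chain(log):
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"prev": prev, "payload": entry["payload"]},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"output_id": 1, "proof": "checked"})
append_record(log, {"output_id": 2, "proof": "checked"})
assert verify_chain(log)          # intact chain verifies
log[0]["payload"]["proof"] = "edited"
assert not verify_chain(log)      # any tampering is detected
```

An auditor who holds only the latest hash can detect any modification of earlier entries, which is what makes the log usable as evidence under Article 12.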

Conclusion: The Future of AI Act Compliance

The AI Act is not an obstacle but an opportunity to build a new generation of trustworthy AI systems. At Formal Foundry, we don’t just help companies “check the boxes” on a regulation. We provide the technology to build trust, safety, and transparency into the very architecture of their solutions.

We offer a shift from the uncertainty of heuristics to the certainty of a mathematical proof. In the age of high-risk AI, this is the only way forward.