Products

The full toolchain.

All of AiExponent's open source tools, the regulatory articles they answer, and the enterprise runtime governance platform in active development. The free tools are Apache 2.0, carry zero telemetry, and install in under 30 seconds. Sigil early access is open.

Free · Apache 2.0 · Zero Telemetry

Three flagship tools. One compound moat.

Each tool maps to a specific EU AI Act obligation and produces a concrete, audit-ready artefact — a signed SBOM, a Risk Management File, or a benchmark report. Install via pip, ship evidence today.

01
Article 53

License Compliance Checker

Scans AI models, software packages, and agentic pipelines for license compliance across 8 ecosystems. Detects HuggingFace model references in code and GGUF/ONNX model files, then generates EU AI Act Article 53 audit evidence with an honest dataset risk registry.

Regulatory relevance

GPAI Compliance · Generates audit evidence supporting EU AI Act Article 53 documentation obligations — evaluates model card completeness, license compliance, and training data risk for AI components in your stack.

pip install license-compliance-checker
02
Article 9

RiskForge

Guided 8-dimension risk assessment CLI with 50+ questions drawn from EU AI Act Article 9 requirements, plus Annex III pattern matching and a SHA-256 hash-chained audit trail. Produces a legally defensible Risk Management File (JSON + PDF) that satisfies Annex IV documentation requirements in approximately 30 minutes, instead of weeks of consulting work.

Regulatory relevance

Risk Management · Produces audit-ready Article 9 risk management files for high-risk AI systems. Covers all 8 EU AI Act risk dimensions — health & safety, fundamental rights, discrimination, privacy, transparency, human oversight, robustness, and data governance — with cross-maps to NIST AI RMF and ISO/IEC 42001.

pip install riskforge
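The hash-chained audit trail is a standard tamper-evidence pattern: each entry stores the hash of its predecessor, so altering any earlier answer invalidates every hash that follows. A minimal illustrative sketch (not RiskForge's actual implementation; the entry fields and helper names here are assumptions):

```python
import hashlib
import json

def append_entry(chain: list[dict], event: dict) -> None:
    """Append an event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain: list[dict] = []
append_entry(chain, {"question": "A9.1", "answer": "yes"})
append_entry(chain, {"question": "A9.2", "answer": "no"})
print(verify_chain(chain))             # True
chain[0]["event"]["answer"] = "no"     # tamper with the first entry
print(verify_chain(chain))             # False
```

Because verification only recomputes hashes, an auditor can check the whole file offline with no trusted server involved.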
03
Article 15

RAG Benchmarking

Plug in any RAG system — LangChain, LlamaIndex, or custom — and benchmark it against classic and agentic-era metrics: faithfulness, answer relevancy, retrieval precision, and four agentic metrics for multi-step agents. Achieves a measured faithfulness of 0.958 on its 50-sample golden dataset.

Regulatory relevance

Accuracy Requirements · Provides systematic accuracy testing and documentation for high-risk AI systems under Article 15.

pip install rag-benchmarking
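Faithfulness is commonly scored as the fraction of claims in a generated answer that are supported by the retrieved context. A toy illustration of the idea (a conceptual sketch only, not the rag-benchmarking API; real evaluators use an LLM judge rather than substring matching):

```python
def faithfulness(claims: list[str], context: str) -> float:
    """Toy faithfulness score: fraction of answer claims found
    verbatim (case-insensitively) in the retrieved context."""
    if not claims:
        return 0.0
    supported = sum(1 for c in claims if c.lower() in context.lower())
    return supported / len(claims)

context = "The EU AI Act entered into force in August 2024."
claims = ["entered into force in August 2024", "applies only to chatbots"]
print(faithfulness(claims, context))  # 0.5 — one of two claims supported
```

A score of 1.0 means every claim in the answer is grounded in the retrieved evidence; unsupported claims pull the score toward 0.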

Infrastructure Layer

Evidence processing utilities

Cross-cutting tooling that feeds the flagships. Alpha stage — not marketed as standalone governance tools. Docker deployment.

Agentic Document Analyser

Articles 11 + 18

Converts unstructured compliance documents — risk assessments, model cards, contracts, audit logs — into structured JSON using Vision-Language Models. Acts as the evidence processing layer for the AiExponent compliance toolchain. Feeds Article 11 technical documentation and Article 18 log preservation workflows.

Infrastructure

Cross-Framework Coverage

One evidence workflow. Many jurisdictions.

Our tools cross-map to the major AI regulations worldwide, so one evidence artefact can satisfy obligations in multiple jurisdictions.

Primary · Enforced

EU AI Act

The world's first comprehensive AI regulation. Phased enforcement 2024–2027.

Articles 4, 5, 9, 10, 13, 15, 53, 72

Up to €35M or 7% of global annual turnover (whichever is higher)

US Federal · Mandatory

NIST AI RMF

Required for US federal agencies under OMB M-24-10. De facto standard for US enterprise AI.

Govern · Map · Measure · Manage

EO 14110 · OMB M-24-10

International · Procurement gate

ISO/IEC 42001:2023

AI Management System standard. Certification increasingly required for enterprise procurement.

39 Annex A controls

Maps to EU AI Act Annex C

Canada · Pending 2026

Canada AIDA

Part of Bill C-27, modelled on the EU AI Act. Expected passage mid-2026 with a 2-year implementation period.

High-impact AI risk assessments

Up to $25M penalties

In Development · Early Access Open

Sigil — Runtime Governance Platform

Commercial AI agent governance platform in active development. Real-time policy enforcement, tamper-evident audit logs, and compliance reporting across EU AI Act Articles 14/17 (human oversight + quality management), NIST AI RMF, and ISO/IEC 42001. Early access available on request.

Articles 14, 17 · Runtime Governance · NIST AI RMF · ISO/IEC 42001

What you'll get at launch

Real-time policy enforcement

Block or amend AI agent actions at runtime before they reach users or downstream systems.

Tamper-evident audit log

SHA-256 hash-chained, append-only. Verifiable with a single command. Article 12/17 ready.

Article 14 human oversight

Configurable human-in-the-loop gates on high-impact actions with structured reviewer evidence.

Cross-framework reporting

One evidence layer, multiple compliance outputs — EU AI Act, NIST AI RMF, ISO/IEC 42001.

Pricing. We're finalising pricing with design-partner customers. Early-access participants help shape the tiers and get founding-customer terms.

Want Sigil before launch?

We're working with a small number of design-partner teams before general availability. If you're building high-risk AI systems and want runtime governance aligned to EU AI Act Articles 14 & 17, get in touch.

Request Early Access

These tools answer specific obligations. For programme-level regulatory design across an AI portfolio, the sister practice is at askajay.ai →