LitmusAI

Article 5 · Flagship · v1.0.0

Screens any AI system description against the eight Article 5 prohibitions and returns per-category Red / Amber / Clear verdicts with regulatory citations, confidence levels, and remediation guidance. Conservative-by-default: prefers Amber over Clear on ambiguity. UNREVIEWED reference ruleset; signed BYO rulesets supported. Apache 2.0, zero network calls, runs entirely offline. The PyPI distribution is `litmus-screener` (the brand is "LitmusAI"; an unrelated `litmus-ai` package already exists on PyPI).

Quick Start

```shell
pip install litmus-screener
```

```shell
# Quick screen from a free-text description
litmus screen --describe "a chatbot for mental health support for teenagers"

# Or from a structured YAML file
litmus init                    # creates a system.yaml template
litmus screen system.yaml --output report.json

# Bring your own lawyer-signed ruleset
litmus use-ruleset your-firm-ruleset.json
litmus screen system.yaml      # report header now reads "(SIGNED by: ...)"

# CI integration with conventional exit codes (0 / 1 / 2 / 3)
litmus screen system.yaml --fail-on red --output report.sarif
```
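The `--fail-on` gate can also be reproduced in a custom CI step by inspecting the JSON report directly. The sketch below assumes a simple report shape (a top-level `verdicts` map of category to Red / Amber / Clear); that shape is an illustrative assumption, not the documented output schema.

```python
import json

# Assumed severity ordering for Article 5 verdicts: Red worst, Clear best.
SEVERITY = {"clear": 0, "amber": 1, "red": 2}

def worst_verdict(report: dict) -> str:
    """Return the most severe verdict across all screened categories."""
    verdicts = report.get("verdicts", {})
    if not verdicts:
        return "clear"
    return max(verdicts.values(), key=lambda v: SEVERITY[v.lower()])

def ci_exit_code(report: dict, fail_on: str = "red") -> int:
    """Map the worst verdict to a simple pass/fail CI exit code."""
    worst = worst_verdict(report)
    return 1 if SEVERITY[worst.lower()] >= SEVERITY[fail_on] else 0

# Example: one Amber and one Clear category in a hypothetical report.
report = json.loads('{"verdicts": {"5.1.a": "Clear", "5.1.f": "Amber"}}')
print(worst_verdict(report))            # Amber
print(ci_exit_code(report, "red"))      # 0 (no Red, so a red gate passes)
print(ci_exit_code(report, "amber"))    # 1 (amber gate fails on Amber)
```

A gate like this is deliberately monotone: tightening `fail_on` from `red` to `amber` can only turn passes into failures, never the reverse.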

Features

  • All 8 Article 5 sub-points (5.1.a–h) covered by a 22-rule reference ruleset
  • Constrained expression language — no `eval`, no Python execution at evaluation time
  • Per-rule confidence band (high / medium / low) on every verdict
  • SHA-256 input hash on every report; canonical JSON ordering (RFC 8785)
  • SARIF 2.1.0 output for GitHub Advanced Security / GitLab SAST / Azure DevOps
  • Markdown + JSON + optional PDF (WeasyPrint) exporters
  • Bring-Your-Own-Ruleset with signed `RulesetSignature` block
  • `litmus diff-ruleset` for structural diffs between ruleset versions
  • Zero network calls during screening (CI-enforced via pytest-socket)
  • GitHub Action wrapper at `aiexponenthq/litmusai/.github/actions/litmusai-screen@v1`
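The input-hash and canonical-ordering features can be approximated in plain Python. Note the hedge: `json.dumps(sort_keys=True, separators=(",", ":"))` agrees with RFC 8785 only for simple ASCII string/integer payloads (full JCS also prescribes number and string serialization rules); this is an illustrative sketch, not LitmusAI's implementation.

```python
import hashlib
import json

def canonical_json(obj) -> str:
    """Approximate RFC 8785 canonical form: sorted keys, no whitespace.
    Adequate for ASCII string/int payloads; full JCS is stricter."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":"), ensure_ascii=True)

def input_hash(system_description: dict) -> str:
    """SHA-256 hex digest over the canonical serialization."""
    data = canonical_json(system_description).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

# Key order does not affect the hash: two logically equal inputs
# always produce the same digest, so reports are reproducible.
a = {"name": "chatbot", "context": "education"}
b = {"context": "education", "name": "chatbot"}
assert input_hash(a) == input_hash(b)
print(input_hash(a)[:16])  # first 16 chars of a stable 64-char digest
```

Canonicalizing before hashing is what makes the hash useful as evidence: it pins the screened input, not an accidental serialization of it.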

Regulatory Foundation

Article 5 · Prohibited AI practices · Application date 2025-02-02 · Enforced

What the regulation requires

1. The following AI practices shall be prohibited:

(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;

(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;

(g) the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
5(1)(a) · 5(1)(b) · 5(1)(f) · 5(1)(g)

What you face if you don't comply

Article 5 has been enforceable since 2 February 2025 — placing or using a prohibited-practice AI system on the EU market today exposes the provider, importer, distributor, or deployer to the highest tier of fines in the regulation: up to €35M or 7% of global annual turnover under Article 99(3). The eight prohibitions are absolute — no consent, opt-out, or post-hoc mitigation rescues a prohibited practice once the system meets the prohibition's criteria. Pre-deployment screening before code is shipped is the only defensible posture.

Up to €35M or 7% of global annual turnover, whichever is higher
Article 99(3) · Penalties

How LitmusAI addresses this

  • 5(1)(a): Detects subliminal-manipulation indicators in system descriptions and outputs; flags when a system materially distorts behaviour against the user's interest
  • 5(1)(b): Pattern-matches vulnerable-population markers (minors, persons with disabilities, persons in vulnerable economic situations) and flags exploitation patterns
  • 5(1)(f): Emits a Red verdict on any system combining emotion inference with workplace or education deployment context (without the medical/safety carve-out)
  • 5(1)(g): Detects untargeted-facial-image-scraping indicators (web crawl + facial recognition + database creation), the exact pattern Article 5(1)(g) prohibits
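The 5(1)(f) rule is a conjunction: emotion inference AND a workplace/education context AND no medical/safety carve-out. A minimal keyword-based sketch of that rule shape follows; the marker lists and thresholds are illustrative assumptions, not the actual 22-rule reference ruleset.

```python
EMOTION_MARKERS = {"emotion recognition", "emotion inference", "affect detection"}
CONTEXT_MARKERS = {"workplace", "employee", "school", "education", "classroom", "student"}
CARVEOUT_MARKERS = {"medical", "safety"}

def screen_5_1_f(description: str) -> str:
    """Return Red / Amber / Clear for the Article 5(1)(f) pattern."""
    text = description.lower()
    emotion = any(m in text for m in EMOTION_MARKERS)
    context = any(m in text for m in CONTEXT_MARKERS)
    carveout = any(m in text for m in CARVEOUT_MARKERS)
    if emotion and context and not carveout:
        return "Red"    # prohibited combination, no carve-out claimed
    if emotion and context:
        return "Amber"  # carve-out claimed: conservative, needs human review
    return "Clear"

print(screen_5_1_f("affect detection for students in a classroom"))   # Red
print(screen_5_1_f("emotion inference for workplace safety alerts"))  # Amber
print(screen_5_1_f("a spam filter for email"))                        # Clear
```

Note the conservative-by-default shape: a claimed carve-out downgrades the verdict only to Amber, never straight to Clear, because whether "safety reasons" genuinely applies is a judgment for counsel, not a keyword match.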

Source: eur-lex.europa.eu/…/CELEX:32024R1689

Known Limitations

  • Reference ruleset is UNREVIEWED: authored by an internal panel, with no external EU AI Act lawyer review yet.
  • Conservative-by-default: prefers Amber over Clear on ambiguity. The trade-off is more false positives, never false negatives on Red.
  • A screening is a screening, not a legal certification. Final determination requires qualified counsel.
  • BYO-ruleset signature verification is structural in v1.0; cryptographic signature verification lands in v1.1.
  • Article 5 only — Articles 6 (high-risk classification), 9 (risk management), 13 (transparency) are out of scope. See RiskForge / TransparencyDeck.
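The "structural" signature verification noted above (as opposed to the cryptographic verification planned for v1.1) amounts to a well-formedness check on the `RulesetSignature` block. The field names below are illustrative assumptions, not the actual schema:

```python
REQUIRED_FIELDS = {"signer", "signed_at", "ruleset_sha256", "signature"}

def verify_signature_structurally(ruleset: dict) -> bool:
    """v1.0-style check: the RulesetSignature block is present and
    well-formed, but the signature bytes are NOT cryptographically
    verified against the signer's key."""
    sig = ruleset.get("RulesetSignature")
    if not isinstance(sig, dict):
        return False
    return REQUIRED_FIELDS <= sig.keys() and all(
        isinstance(sig[f], str) and sig[f] for f in REQUIRED_FIELDS
    )

ruleset = {
    "rules": [],
    "RulesetSignature": {
        "signer": "example-firm-llp",
        "signed_at": "2025-01-01T00:00:00Z",
        "ruleset_sha256": "ab" * 32,
        "signature": "base64-bytes-here",
    },
}
print(verify_signature_structurally(ruleset))        # True: structurally valid
print(verify_signature_structurally({"rules": []}))  # False: no signature block
```

In other words, a v1.0 "(SIGNED by: ...)" header attests that a signature block exists and names a signer; it does not yet prove the named party actually signed that ruleset.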

For the most current status, see GitHub issues.

Contributing

Contributions are welcome — Apache 2.0 licensed. See the contributing guide and open issues.

License

Licensed under the Apache License 2.0. Not legal advice. Not a notified body.

The Compound Moat

One tool is a start. The chain is the moat.

Each AiExponent tool produces structured evidence the next tool consumes. Browse the full toolchain — from Article 5 screening through Article 72 post-market monitoring.

See all tools →