Classify. Implement. Ship.
Compliance runs where your code runs. CLIC scans your AI codebase, gives you an EU AI Act classification with the legal reasoning behind it, and emits the engineering tasks that follow.
CLIC finds what the AI Act requires.
CLIC identifies regulated AI systems in your repository, maps every obligation, and turns each one into concrete engineering tasks.
"candidateCount": 2,
"candidates": [
CLIC Agent implements it.
CLIC and CLIC Agent both ship with an MCP server. Connect once — classify, configure git hooks, manage tasks, and trigger implementation without touching a terminal.
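As an illustration, most MCP clients register a server through a small JSON config. The sketch below follows the standard `mcpServers` shape; the server name `clic` and the `clic mcp` command are assumptions for illustration, not CLIC's documented invocation:

```json
{
  "mcpServers": {
    "clic": {
      "command": "clic",
      "args": ["mcp"]
    }
  }
}
```

Once registered, the client can call the server's tools (classification, git-hook setup, task management) directly from the editor or chat interface.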
Three outcomes, not two.
Where the AI Act is unambiguous, CLIC gives a definitive classification. Where genuine boundary cases exist — GPAI edge cases, contested scope questions, novel deployment patterns — CLIC returns needs_review with the specific legal question surfaced.
There is no hidden editorial layer. needs_review is not a failure state; it is the precise answer for cases the Act itself leaves open. It tells your legal team exactly where to focus — and it is a stronger signal than a false positive.
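The three-outcome model can be sketched as a small result type. This is a hypothetical illustration, not CLIC's actual API; the names `ClassificationResult`, `outcome`, and `requiresLegalInput` are assumptions:

```typescript
// Hypothetical sketch of a three-outcome classification result.
// All identifiers here are assumptions for illustration, not CLIC's API.
type Outcome = "classified_in_scope" | "classified_out_of_scope" | "needs_review";

interface ClassificationResult {
  outcome: Outcome;
  // Legal reasoning for a definitive call, or the specific open
  // legal question when the outcome is needs_review.
  reasoning: string;
}

// needs_review is a first-class result that routes to legal, not an error path.
function requiresLegalInput(result: ClassificationResult): boolean {
  return result.outcome === "needs_review";
}
```

Modeling `needs_review` as a regular variant, rather than an exception, is what lets downstream tooling route boundary cases to a legal queue instead of silently forcing a binary call.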
CLIC is built by an AI engineer with a background in autonomous systems, aerospace informatics, and technical consulting. His work spans production LLM systems, backend architectures, AI workflows, and regulated infrastructure, including railway and enterprise software contexts.
The product is built from an engineering perspective: translating AI Act classification, evidence, and obligations into concrete technical outputs that development teams can actually use.