CLIC by RegOps · The compliance layer for AI engineering

Classify. Implement. Ship.

Classify · Implement · Ship // continuous · CI/CD native

Compliance runs where your code runs. CLIC scans your AI codebase, gives you an EU AI Act classification with the legal reasoning behind it, and emits the engineering tasks that follow.

// phase 01 · classify — fastest path to AI Act classification

CLIC finds what the AI Act requires.

CLIC identifies regulated AI systems in your repository, maps every obligation, and turns them into concrete engineering tasks.

01 · scan/repository · acme/lending-app
// 1,284 files · 2 candidates found
app/
  ·layout.tsx
  ·page.tsx
lib/
  ·utils.ts
  ·scoring.ts
  ·recommend.ts
models/
  ·credit-risk.pkl
  ·fraud-detect.onnx
  ·embeddings.bin
pipelines/
  ·training.py
tests/
node_modules/
·package.json
credit-scoring [HIGH RISK · Annex III §5(b)]
fraud-detection [LIMITED · Art. 50]
2 candidates · 1 high-risk · 1 limited-risk
02 · $ clic check --repo . --json · ● ready
{
  "status": "warn",
  "candidateCount": 2,
  "candidates": [
    {
      "label": "credit-scoring",
      "riskClass": "high",
      "evidenceSummary": "Annex III §5(b) · Art. 9 · Art. 12",
      "tasks": [
        { "title": "Add transparency notice", "priority": "high" },
        { "title": "Implement audit logging", "priority": "high" },
        { "title": "Risk management hooks", "priority": "medium" }
      ]
    },
    {
      "label": "fraud-detection",
      "riskClass": "limited",
      "evidenceSummary": "Art. 50 · transparency obligation",
      "tasks": [
        { "title": "Add transparency notice", "priority": "medium" }
      ]
    }
  ]
}
2 systems · 4 tasks
// or connect via MCP — zero config
// phase 02 · implement

CLIC Agent implements it.

CLIC and CLIC Agent both ship with an MCP server. Connect once — classify, configure git hooks, manage tasks, and trigger implementation without touching a terminal.
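For MCP-capable clients, connecting usually means one server entry in the client config. A sketch, assuming a `clic mcp` subcommand starts the bundled server (the subcommand name is an assumption; the `mcpServers` shape follows common MCP client configs):

```json
{
  "mcpServers": {
    "clic": {
      "command": "clic",
      "args": ["mcp"]
    }
  }
}
```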

01 · CLIC/scan · tasks.json
// 7 obligations · Annex III §5(b)
T-01
Add transparency notice
Art. 50(1) · user-facing
HIGH
T-02
Log inference decisions
Art. 12 · audit trail
HIGH
T-03
Risk mgmt hooks
Art. 9 · pre-deploy
MED
T-04
Data governance check
Art. 10 · training
MED
02 · CLIC Agent/agent · ● ready
analysing repository
[plan] 4 tasks · 12 files affected
[T-01] insert components/AIDisclosure.tsx
mount in app/layout.tsx
[T-02] wrap lib/inference.ts
with audit logger · +34 lines
[T-03] add CI gate
.github/workflows/risk.yml
[evidence] writing .clic/
artefacts → conformity bundle
[mcp] git hook configured
[mcp] tasks.json → agent context
committing branch clic/eu-ai-act
~4m avg. runtime · MCP server included
03 · repo/pull request · + ready to merge
app/layout.tsx · +8 −2
  return (
    <html>
+     <AIDisclosure />
      <Layout>
lib/inference.ts · +34 −6
- return model.predict(input);
+ const out = await audit.wrap(
+   () => model.predict(input),
+   { article: 12, system });
+ return out;
.clic/evidence.md · +1 file
.clic/tasks.md · +1 file
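The `audit.wrap` call in the diff above is generated by CLIC Agent; its exact API isn't documented here, but the pattern is easy to sketch. A minimal illustration, assuming a wrapper that records article, system, and outcome around each prediction — `wrap`, `AuditEntry`, and the in-memory log are hypothetical, not CLIC's API, and the generated code is async where this sketch is synchronous for brevity:

```typescript
// Hypothetical sketch of the audit-wrapping pattern shown in the diff above.
type AuditEntry = { article: number; system: string; at: string; ok: boolean };

const auditLog: AuditEntry[] = [];

// Run an inference function, record an audit entry either way (Art. 12-style
// decision logging), and return the result or rethrow the error unchanged.
function wrap<T>(fn: () => T, meta: { article: number; system: string }): T {
  const entry: AuditEntry = { ...meta, at: new Date().toISOString(), ok: false };
  try {
    const out = fn();
    entry.ok = true;
    return out;
  } finally {
    auditLog.push(entry);
  }
}

// Usage mirroring lib/inference.ts in the pull request:
const model = { predict: (input: number[]) => input.length }; // stand-in model
const out = wrap(() => model.predict([1, 2, 3]), { article: 12, system: "credit-scoring" });
console.log(out, auditLog.length); // 3 1
```

The point of the `finally` block is that the audit entry is written even when inference throws, so failed decisions are logged too.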
// 01
Any stack, zero config
Python, TypeScript, Go, Rust. CLIC reads your repository as-is. Regulated AI systems surface from what's actually in your code.
// 02
Fact-linked evidence
Every finding is cited to an AI Act article or recital, creating verifiable claims your engineers can act on.
// 03
CI/CD & MCP native
Run headless in your CI/CD pipeline or as a git hook, or connect via the MCP server for interactive control. Classify on commit, configure hooks, pipe tasks into your issue tracker — your workflow, your interface.
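As a sketch of the git-hook mode, a `.git/hooks/pre-commit` script might gate commits on the JSON report. The invocation `clic check --repo . --json` and the `"warn"` status are taken from the demo above; the gating logic is an assumption, and a canned report stands in for the real call so the snippet runs as-is:

```shell
#!/bin/sh
# Sketch of a pre-commit gate around the CLIC report (hypothetical wiring).
# A real hook would run:  report="$(clic check --repo . --json)"
report='{"status": "warn", "candidateCount": 2}'

case "$report" in
  *'"status": "warn"'*)
    echo "CLIC: open AI Act obligations, review tasks before committing"
    ;;
  *)
    echo "CLIC: clean"
    ;;
esac
```

A real hook would exit non-zero on the warn branch to block the commit; exit codes are how git hooks signal pass or fail.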
// honest engineering

Three outcomes, not two.

Where the AI Act is unambiguous, CLIC gives a definitive classification. Where genuine boundary cases exist — GPAI edge cases, contested scope questions, novel deployment patterns — CLIC returns needs_review with the specific legal question surfaced.

There is no hidden editorial layer. needs_review is not a failure state; it is the precise answer for cases the Act itself leaves open. It tells your legal team exactly where to focus — and it is a stronger signal than a false positive.

covered · AI Act obligations apply. Engineering tasks emitted.
not_covered · System outside AI Act scope. Reasoning on file.
needs_review · Boundary case. Specific legal question flagged for counsel.
// AI Act rollout — where we are today
Feb 2025
Prohibitions active
Art. 5
Aug 2025
GPAI obligations
Art. 51–55
Aug 2026
High-risk deadline
Art. 6 + Annex III
→ now
2027
Full enforcement
All obligations
// common questions
Can engineers use CLIC without legal expertise?
Yes. CLIC reads your codebase and outputs concrete engineering tasks — no legal background required. Your legal team reviews the findings.
How does CLIC fit into an existing pipeline?
CLIC runs via Git hooks or a single CLI command — no new infrastructure required.
// built by
Frederik Schmittel · AI Engineer

CLIC is built by an AI engineer with a background in autonomous systems, aerospace informatics, and technical consulting. His work spans production LLM systems, backend architecture, AI workflows, and regulated infrastructure, including railway and enterprise software.

The product is built from an engineering perspective: translating AI Act classification, evidence, and obligations into concrete technical outputs that development teams can actually use.

// ready when you are

From AI codebase to audit-ready system.

Book a demo