SERVICE

AI Governance & Compliance

The AI Act is in force. The algoritmeregister obligations apply. DPIAs must cover AI-specific risks. We put classification, a risk register, and model and vendor governance in place without replacing your legal function.

Who it is for

Legal, compliance, and data-protection teams who need to evidence AI Act readiness. CISOs and risk officers in organisations running AI in production. Public-sector bodies preparing for AI Act enforcement and algoritmeregister obligations.

  • Legal, compliance, data-protection
  • CISOs and risk officers
  • Public and semi-public organisations

Problem it solves

Most organisations discover, late, that their AI systems are unclassified, their risk register is empty, and no one owns model or vendor governance. This service fixes that.

  • "AI Act enforcement is coming, are we ready?"
  • "Our DPIAs do not cover AI risks."
  • "Who approves a new AI model?"

What a typical engagement looks like

Team: A governance lead from our team, with a technical lead for model and vendor work.

Approach

  • Documented, reasoned classification per use case.
  • Working risk register in the client's GRC tool, not a one-off spreadsheet.
  • Model and vendor governance integrated with existing policies.
  • Human-oversight patterns engineering teams can actually implement.
  • Integratio does not practise law. The output supports the legal function; it does not replace it.

Proof

TODO(sanne): anonymised example of an AI Act classification engagement, naming the sector.

What this service is not

  • Legal advice: we design the governance; your legal team keeps accountability.
  • A generic GRC implementation.
  • A one-shot classification with no operational embedding.

Ready to start?

Every first conversation is with someone who actually builds.