Julian Kusenberg IT Beratung

Microsoft Purview · Compliance · AI Governance


The EU AI Act Is Not Waiting for Your Perfect Governance Deck


EU AI Act  ·  Microsoft Purview  ·  AI Governance

Your legal team is reading the regulation.

Your security team is asking what needs to be logged.

Your business teams are already testing Copilot, custom agents and AI-enabled workflows.

And somewhere between those three realities, someone will eventually ask the uncomfortable question:

What have we actually configured?

That is where many AI governance discussions become strangely abstract. Everyone agrees that accountability matters. Everyone agrees that sensitive data should not flow into the wrong place. Everyone agrees that AI usage needs oversight, evidence and control.

But agreement is not a control.

A policy PDF is not a control.

A steering committee is not a control.

At some point, AI governance has to become operational.

In Microsoft 365, that usually means looking at Microsoft Purview. Not because Purview magically makes an organization compliant with the EU AI Act. It does not. But because many AI Act readiness conversations eventually depend on the same practical questions Purview is built to help answer.

The foundational questions

What data do we have?

How sensitive is it?

Who can access it?

Was protection applied?

What did users and systems do?

Can we investigate?

Can we show evidence?

Those are not theoretical questions. They are the foundation of a defensible AI rollout.



EU AI Act Application Timeline – Aug 2026, Dec 2027, Aug 2028



The date matters, but not in the way many posts make it sound

The EU AI Act has several application dates. 2 August 2026 remains an important milestone, but it should not be presented as a single universal enforcement cliff for every AI obligation.

In May 2026, the Council and Parliament reached a provisional agreement that would delay the application of certain High Risk AI rules to 2 December 2027 for standalone High Risk AI systems and to 2 August 2028 for High Risk AI systems embedded in products. That agreement still needs formal adoption, but it changes the tone of any serious readiness discussion.

Application timeline (provisional)

  • Aug 2026 – General milestone; prohibited AI and general-purpose AI model provisions
  • Dec 2027 – Standalone High-Risk AI systems (Annex III); pending formal adoption
  • Aug 2028 – High-Risk AI embedded in regulated products (Annex I); pending formal adoption
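The staggered dates above can be made concrete as a simple lookup. This is a minimal sketch, not legal logic: the milestone dates are taken from the (still provisional) timeline above, and everything else is illustrative.

```python
from datetime import date

# Provisional EU AI Act application milestones from the timeline above.
# The Dec 2027 and Aug 2028 dates still require formal adoption.
MILESTONES = {
    date(2026, 8, 2): "General milestone",
    date(2027, 12, 2): "Standalone High-Risk AI systems (Annex III)",
    date(2028, 8, 2): "High-Risk AI embedded in regulated products (Annex I)",
}

def applicable_milestones(today: date) -> list[str]:
    """Return the milestones whose application date has already passed."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]
```

The point of writing it down this way is the one the article makes: whichever dates finally apply, the control-building work is the same.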

So the question is not simply: Are we compliant by August?

The better question is: Are we building the controls now that we will need anyway?

Because whether a specific obligation lands in 2026, 2027 or 2028, most enterprises are already using AI in Microsoft 365. Copilot is being rolled out. Agents are being explored. AI apps are being tested. Data is being summarized, searched, grounded, reused and shared.

The regulatory calendar may shift.

Your data exposure does not wait.



Penalties get attention. Evidence wins the discussion.

Penalty framework · EU AI Act Art. 99

Up to €35 million or 7% of worldwide annual turnover for the highest infringement category (prohibited practices). Other violations, including most High-Risk AI failures: up to €15 million or 3%.

That number is useful for attention. It is not useful as an implementation plan.

The practical work starts somewhere else. It starts with the ability to show that AI usage is not happening in a fog. For High Risk AI systems, the European Commission describes obligations around risk management, data quality, technical documentation, traceability, transparency, human oversight, accuracy, cybersecurity and robustness. Deployers of High Risk AI systems also need to monitor operation, assign human oversight and ensure input data is relevant and representative for the intended purpose.

What this means in practice

You need classified data.

You need access control.

You need traceability.

You need monitoring.

You need evidence.

You need a way to connect technical controls to regulatory requirements.

That is exactly where Microsoft Purview becomes relevant.



01

Classify and protect the data before AI finds it

AI does not create bad permissions. It exposes them.

Microsoft 365 Copilot and agents do not need a dramatic data breach scenario to create risk. Sometimes they only need a badly governed SharePoint site, an overshared folder, a forgotten Teams workspace or a confidential file that was never classified.

Sensitivity Labels are not the entire answer to AI governance. But they are one of the most important building blocks in Microsoft 365. Microsoft describes Sensitivity Labels as a way to classify and protect organizational data while keeping collaboration possible. Labels can identify the sensitivity of data, enforce protection settings and generate usage reports and activity data.

For AI usage, this matters because Microsoft 365 Copilot works with Purview Sensitivity Labels and encryption to enforce access controls and protection settings during grounding and content generation. Copilot can only summarize or reference content the user is authorized to access, and protection settings remain enforced even when labeled files are stored outside the Microsoft 365 tenant.

That makes labels more than a visual sticker. A mature label model can help answer questions like:

Which data is public, internal, confidential or highly confidential?

Which content should be encrypted?

Which users should be able to extract or reuse protected information?

Which data should trigger DLP, Insider Risk or review workflows?

Which content is too sensitive for broad AI interaction?

The action is not simply "turn on labels".

Build a label taxonomy the business understands. Pilot it with real use cases. Define protection logic. Test user experience. Expand across workloads where supported.
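One way to make those taxonomy questions concrete is to write the intended decisions down as data before configuring anything in the tenant. A minimal Python sketch follows; the tier names and flags are entirely illustrative assumptions, not your tenant's actual labels or Microsoft's configuration model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensitivityTier:
    """One tier in a hypothetical four-level label taxonomy.

    All names and flags are illustrative placeholders."""
    name: str
    encrypt: bool             # should content under this label be encrypted?
    allow_ai_grounding: bool  # may Copilot/agents ground on this content?
    triggers_dlp: bool        # should this label condition DLP/Insider Risk policies?

TAXONOMY = [
    SensitivityTier("Public",              encrypt=False, allow_ai_grounding=True,  triggers_dlp=False),
    SensitivityTier("Internal",            encrypt=False, allow_ai_grounding=True,  triggers_dlp=False),
    SensitivityTier("Confidential",        encrypt=True,  allow_ai_grounding=True,  triggers_dlp=True),
    SensitivityTier("Highly Confidential", encrypt=True,  allow_ai_grounding=False, triggers_dlp=True),
]

def ai_eligible(tier: SensitivityTier) -> bool:
    # Content is a candidate for broad AI interaction only if its label allows grounding.
    return tier.allow_ai_grounding
```

Writing the model down first forces the business conversation ("which tier may Copilot touch?") to finish before the technical rollout starts.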

For Azure AI Foundry scenarios, Microsoft documents Purview support for capabilities such as data classification, Sensitivity Labels, DLP, Audit, Insider Risk Management, Communication Compliance, eDiscovery, Data Lifecycle Management and Compliance Manager. But those capabilities require the relevant Purview Data Security configuration for Foundry, and some policy management scenarios require pay-as-you-go billing.

"We use Purview" is not a control.

Configured policies are controls.

Tested scenarios are controls.

Documented decisions are controls.



02

Build traceability before the investigation starts

Audit is boring until it is the only thing that matters.

When an AI-related incident happens, the first question is rarely philosophical. It is practical:

Who accessed the data?

Which system was used?

What interaction happened?

Was sensitive information involved?

Can we reconstruct the sequence?

Can we prove what happened?

Purview Audit is one of the core building blocks for that answer. Microsoft describes it as a set of tools for searching and managing audit records across Microsoft services, supporting security events, forensic investigations, internal investigations and compliance obligations.

For Microsoft 365 Copilot, Microsoft documents that Copilot interaction data is part of the compliance and auditing architecture, alongside the enforcement of existing Microsoft 365 protections.

But this needs careful wording. The EU AI Act does not say that every organization must keep every AI-related log for ten years. Some High Risk deployer obligations refer to retaining logs generated by the AI system for at least six months, where such logs are under the deployer’s control.

Not the right recommendation:

Enable ten-year retention for everything.

The right recommendation:

Define your retention strategy based on regulatory, investigative and business requirements. Start by making sure auditing is enabled, roles are assigned correctly, audit search works, Copilot and agent activities are understood, and investigation procedures are documented.
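That retention logic can be sketched in a few lines: a period must satisfy every applicable requirement, so the answer is the longest one. The six-month figure reflects the AI Act minimum for certain High-Risk deployer logs mentioned above; the other values are placeholder assumptions you would replace with your own legal and business analysis.

```python
from datetime import timedelta

# Illustrative requirement set for one log category.
requirements = {
    "ai_act_deployer_minimum": timedelta(days=183),  # "at least six months"
    "internal_investigations": timedelta(days=365),  # assumption
    "business_records":        timedelta(days=90),   # assumption
}

def required_retention(reqs: dict[str, timedelta]) -> timedelta:
    """The retention period must satisfy every applicable requirement,
    so take the longest one."""
    return max(reqs.values())
```

Run per log category, not once for the whole tenant: that is the difference between a retention strategy and "ten years for everything".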

The same applies to Insider Risk Management. Insider Risk can help detect risky activity patterns involving sensitive data. Sensitivity Labels can serve as conditions that trigger DLP and Insider Risk policies.

But Insider Risk is not "human oversight" in the AI Act sense. Human oversight means that people are assigned, enabled and equipped to supervise the relevant AI system where required. Purview can provide signals, cases, alerts and evidence. It does not replace the governance model, the escalation path or the accountable human role.

Purview helps you see and investigate.

Your organization still needs to decide who acts.



03

Map controls to accountability, not vibes

Many AI governance programs fail because everything is scattered.

Legal has requirements.

Security has tools.

IT has settings.

The business has use cases.

Compliance has spreadsheets.

Nobody has one connected view.

This is where Compliance Manager becomes useful. Microsoft states that Compliance Manager includes pre-built assessments for AI regulations and frameworks such as the EU AI Act, ISO/IEC 23894, ISO/IEC 42001 and NIST AI RMF. These assessments are designed to benchmark compliance over time, report control status and maintain evidence for both Microsoft and customer activities supporting the compliance program.

That is valuable because AI governance needs more than isolated technical measures.

What Compliance Manager provides

Owners.

Evidence.

Status.

Gaps.

Improvement actions.

A way to show progress without pretending that every control is already perfect.

The Compliance Manager EU AI Act assessment is not:

A legal opinion. A certification. Proof of compliance.

A substitute for determining whether your AI system is prohibited, limited-risk, general-purpose, High-Risk or outside scope.

What it is:

A strong operational baseline. A common language. A connection between controls and requirements. A way to move from "we should do something about AI governance" to "this requirement has an owner, this evidence exists, this gap is open and this action is due."

That is a very different level of maturity.
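What that maturity level looks like as data is essentially a control register: every requirement tied to an owner, its evidence, and its status. A minimal sketch, with purely illustrative entries (this is not the Compliance Manager data model):

```python
from dataclasses import dataclass, field

@dataclass
class ControlEntry:
    """One row of a hypothetical control register. Values are illustrative."""
    requirement: str
    owner: str
    evidence: list[str] = field(default_factory=list)
    status: str = "open"  # "open" | "in_progress" | "implemented"

def open_gaps(register: list[ControlEntry]) -> list[str]:
    # A gap is any requirement not yet implemented, or implemented without evidence.
    return [c.requirement for c in register
            if c.status != "implemented" or not c.evidence]

register = [
    ControlEntry("Audit logging enabled", "IT Security",
                 evidence=["audit-config-export.pdf"], status="implemented"),
    ControlEntry("Label taxonomy approved", "Data Governance",
                 status="in_progress"),
]
```

Note the second condition in `open_gaps`: a control marked "implemented" with no evidence is still a gap, which is exactly the difference between claiming a control and being able to show it.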




A practical first month

Not a 60-page governance manifesto. Four weeks of actual work.

Week 1

Identify AI use cases and data exposure

List where Microsoft 365 Copilot, custom agents, Azure AI Foundry apps and other enterprise AI tools are being used or planned. Identify which data sources they can reach. Pay special attention to SharePoint, OneDrive, Teams, Exchange, Fabric and any connected repositories.
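The Week 1 output can be as simple as a list of use cases mapped to reachable data sources, which you can then invert to see exposure per repository. A sketch with made-up example entries:

```python
# Minimal Week 1 inventory: each AI use case mapped to the data sources it can
# reach. The entries are examples, not a recommendation.
inventory = [
    {"use_case": "Microsoft 365 Copilot pilot",
     "data_sources": ["SharePoint", "OneDrive", "Teams", "Exchange"]},
    {"use_case": "Custom HR agent",
     "data_sources": ["SharePoint"]},
]

def exposure_by_source(entries: list[dict]) -> dict[str, list[str]]:
    """Invert the inventory: for each data source, which AI use cases touch it?"""
    out: dict[str, list[str]] = {}
    for e in entries:
        for src in e["data_sources"]:
            out.setdefault(src, []).append(e["use_case"])
    return out
```

The inverted view is what surfaces the overshared SharePoint site that two different AI use cases can already reach.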

Week 2

Review labels and protection

Check whether the current Sensitivity Label taxonomy is understandable, usable and technically meaningful. Validate encryption, access rights, external sharing, default labels, auto-labeling candidates and user guidance.

Week 3

Validate audit and investigation readiness

Confirm that relevant audit events are available, searchable and retained according to your requirements. Review roles, access to audit search, eDiscovery readiness, Insider Risk signals, DLP alerts and investigation procedures.

Week 4

Start the Compliance Manager baseline

Activate or review the EU AI Act assessment. Map existing controls, assign owners, collect evidence and document the first gaps. Do not aim for perfect. Aim for visible, structured progress.

This is not the entire AI Act journey. But it produces something real. Not just awareness. Not just principles. Not just another slide saying "responsible AI." Actual controls.



The real point

The EU AI Act should not push organizations into panic. But it should end the comfortable phase where AI governance is treated as a future workshop topic.

In Microsoft 365, many of the relevant foundations already exist:

Sensitivity Labels help classify and protect data.

DLP and Insider Risk can use that context to detect and respond to risky behavior.

Audit and eDiscovery help reconstruct what happened.

DSPM for AI can surface exposure and AI usage patterns.

Compliance Manager can structure the control mapping and evidence trail.

None of this makes compliance automatic. But without these foundations, AI governance remains a promise. And promises are hard to audit.

While the legal team reads the regulation

Classify the data.

Protect what matters.

Log what happens.

Prepare the evidence.

Assign the owners.

That is how you move from regulatory overload to operational control.

Author

  • Julian Kusenberg

    Julian Kusenberg is a Senior Consultant at SoftwareOne and supports companies in implementing Microsoft Purview, particularly in the areas of information governance, data protection and Insider Risk Management. With many years of experience delivering compliance and data protection solutions, he helps organizations meet regulatory requirements efficiently in Microsoft 365 environments. His expertise includes complex eDiscovery and forensics projects, in which he combines technical know-how with strategic consulting.


About Julian

Microsoft MVP for Purview, Senior Consultant at SoftwareOne and speaker on Microsoft 365 compliance, eDiscovery, Insider Risk Management, data security and AI governance.

