
Take control of AI in business: From strategy to technical assurance with Purview DLP

  • Writer: Bjørnar Aassveen
  • Sep 23
  • 4 min read

AI tools have quickly become a natural part of everyday life for many employees. They provide increased productivity and new opportunities, but also new risks. Without clear guidelines and technical controls, sensitive data can end up in unauthorized services, which can lead to GDPR violations, loss of trade secrets and non-compliance with upcoming regulations such as the EU AI Act.


Before you get started: Find your AI strategy

Before you set up technical barriers like DLP policies, it’s crucial to have a clear understanding of how your business will use AI. That picture will keep changing, since the technology moves faster than any governance process, but you have to start somewhere. This is not just about which tools are approved, but also about what data can be shared, what processes can be automated, and what risks need to be managed.


As a good start:

  • Map which AI services are in use today, both approved and unauthorized. (Microsoft Purview DSPM for AI and DLP policies in simulation mode are good tools for this.)

  • Define which services should be allowed, and what they can be used for.

  • Establish guidelines for the use of AI, including employee training.

  • Anchor it in an AI strategy that supports your business goals, while also ensuring security and compliance.


The EU’s AI Act, which entered into force in 2024 and whose obligations apply in stages through 2026 and 2027, sets clear requirements for the control and governance of AI systems. The regulation takes a risk-based approach, where high-risk AI systems must meet strict requirements for transparency, traceability and accountability. Businesses must be able to document which AI tools are used and how they are handled. Violations can result in fines of up to €35 million or 7% of global turnover, whichever is higher, which exceeds the GDPR levels.


In short: Having an overview and control of AI use is not just good practice, it is required by law.

In the rest of this post, I show how you can set up a DLP policy in Microsoft Purview that blocks file uploads to unauthorized AI tools.

This builds on my post about the Purview Browser Extension, which is a prerequisite for this to work in the browser.


Important! And worth repeating:

Blocking file uploads to unauthorized AI tools is not just a technical control. It is a critical part of your organization’s security and compliance strategy, and this kind of control will also be required under the EU AI Act.

Several AI vendors use uploaded data to train and improve their models. This can result in sensitive information becoming part of a model that you have no control over.


  • When you upload data to an AI service, the data may be stored on servers outside the EU, which may violate GDPR and your organization’s own data retention policies.

  • Unauthorized AI tools (like any other unvetted IT tool handling internal data) usually do not have a data processing agreement with your organization. This means you have no legal basis for sharing data with them.

  • Using unauthorized AI tools outside your organization’s control muddies the risk picture and makes it difficult to implement a comprehensive security strategy.


Wouldn't it be unfortunate if all the data you fed into an AI service ended up directly with a competitor? 🕵️




Aassveen AS has decided in its AI strategy to use Copilot as the only approved AI tool. (Wink wink, Microsoft.) After a good round of internal anchoring and communication, I have now tightened the rules for AI service use with DLP rules in Purview.


DLP policy setup


1. Go to Microsoft Purview

Navigate to Data Loss Prevention → Policies → Create policy.


2. Choose what type of data to protect

→ Data stored in connected sources → Give the policy a name, for example "AI regulation"


3. Define scope

→ Devices → scope it to a group of devices or users if necessary


4. Define conditions

→ In my case, the rule matches when the file is one of the defined file types:


5. Set up actions

→ Define which actions should occur when the conditions are met. In my example, I have chosen to block all file uploads and copy/paste actions to the browsers and domains defined in the domain restrictions.


Here I have chosen to use the standard "Generative AI websites" group from Microsoft. This group cannot be edited, but it is maintained and updated by Microsoft: https://learn.microsoft.com/en-us/purview/ai-microsoft-purview-supported-sites



If you want to base the policy on a custom group of your own, this can be set up here:

→ Settings → Data Loss Prevention → Endpoint DLP settings → Browser and domain restrictions to sensitive data.


Once the policy is set up, you have blocked file uploads to a number of AI services.
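If you prefer scripting over clicking, roughly the same policy can be sketched in Security & Compliance PowerShell. The cmdlets below (Connect-IPPSSession, New-DlpCompliancePolicy, New-DlpComplianceRule) are real, but the file extensions and the exact endpoint restriction setting names are illustrative assumptions on my part, so verify them against Microsoft's current documentation before running this in your tenant:

```powershell
# Connect to Security & Compliance PowerShell
# (requires the ExchangeOnlineManagement module)
Connect-IPPSSession

# Create the policy scoped to endpoint devices, starting in
# simulation/test mode so you can see the scope before blocking.
New-DlpCompliancePolicy -Name "AI regulation" `
    -EndpointDlpLocation All `
    -Mode TestWithoutNotifications

# Add a rule that matches the chosen file extensions and blocks
# upload/paste to the restricted browser domains. The setting names
# in EndpointDlpRestrictions are illustrative - check the docs.
New-DlpComplianceRule -Policy "AI regulation" `
    -Name "Block uploads to AI sites" `
    -ContentExtensionMatchesWords "docx","xlsx","pdf","csv" `
    -EndpointDlpRestrictions @(
        @{ Setting = "CopyPaste"; Value = "Block" },
        @{ Setting = "UploadToRestrictedCloudServiceDomain"; Value = "Block" }
    )
```

The domain group itself (Microsoft's "Generative AI websites" or your own) is still selected under the endpoint DLP browser and domain restriction settings, as shown in the portal walkthrough above.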


Tips:

  • Start with Audit mode to see the scope before switching to Block mode.

  • If you use your own group of sites, remember to update it regularly!
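The first tip can also be scripted. Assuming the policy was created in a test mode, a one-liner with the real Set-DlpCompliancePolicy cmdlet flips it to enforcement once the simulation results look sane:

```powershell
# Review matches while in test mode, then switch to enforcement:
Set-DlpCompliancePolicy -Identity "AI regulation" -Mode Enable
```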



In summary: AI is not magic, it is a new type of risk with glitter on it. (Greetings from Bjørnar, 33, in a tinfoil hat 🥸)


Releasing unregulated AI into your business without control is like handing your house keys to a stranger just because he has a cool robot voice and can write poems and funny lyrics on the fly.


  • Find your AI strategy: Which tools are allowed? What should they be used for? And what is absolutely forbidden?

  • Remember the EU AI Act: Soon it will not only be a good idea to have control, it will be required by law. And the fines? They make GDPR look like small change.

  • Don't blindly trust AI services: Many "free" tools are actually data guzzlers that train models on your files, and who knows where they end up? Maybe at your competitor's.


Bjørnar&(regulated)AI

 
 
 


