This guide provides a detailed breakdown of AI privacy compliance for managed service providers. It applies to the full scope of records in your business systems, such as Autotask PSA data, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Security teams, compliance officers, and MSP owners managing risk
How often: Weekly for security posture, monthly for compliance reporting, on-demand for audits
You want to use AI to query your business data through Power BI. But your clients' data flows through that AI. This guide covers the legal obligations, hosting options, and practical steps so you can make an informed decision.
You’re an MSP. Your business systems (PSA, RMM, CRM) contain ticket data, client names, employee names, device details, contract values, and operational metrics for every customer you serve. This is personal data under GDPR, and it belongs to your clients, not to you.
When you connect an AI agent to query that data, every question you ask sends information to the AI provider’s servers. That makes the AI provider a sub-processor under data protection law. And that triggers a chain of legal obligations under Article 28 GDPR you cannot ignore.
Your client (data controller) → You, the MSP (data processor) → The AI provider (sub-processor). Each link in this chain carries legal obligations. If any link breaks, you are liable.
Proxuma Power BI gives you a structured semantic model with clearly labeled measures, tables, and relationships across your entire tool stack. This is what makes AI queries possible in the first place: the data is structured and labeled, so an AI can write precise DAX queries instead of fumbling through raw APIs. But the AI still needs to see the query results. And that’s where the privacy question begins.
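To make that boundary concrete, here is a minimal sketch of how an AI-generated DAX query might be executed against a Power BI semantic model via the REST executeQueries endpoint. The dataset ID, token handling, and the table and measure names are placeholders, not Proxuma specifics; the point is that whatever rows come back are exactly what the AI provider sees.

```python
# Minimal sketch: executing an AI-generated DAX query against a Power BI
# semantic model via the REST executeQueries endpoint. Dataset ID, token
# acquisition, and measure names are hypothetical placeholders.
import requests

DATASET_ID = "<your-dataset-id>"   # placeholder
ACCESS_TOKEN = "<azure-ad-token>"  # obtain via MSAL / a service principal

# The kind of precise DAX an AI can write against labeled measures
dax_query = """
EVALUATE
SUMMARIZECOLUMNS(
    'Clients'[ClientName],
    "Open Tickets", [Open Ticket Count]
)
"""

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"queries": [{"query": dax_query}]},
    timeout=30,
)
resp.raise_for_status()
rows = resp.json()["results"][0]["tables"][0]["rows"]

# Whatever is in `rows` is what the AI provider will see once you pass it
# back as context. That is the privacy boundary this guide is about.
print(rows)
```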
Under GDPR, MSPs are data processors. Your clients are data controllers. When you introduce an AI provider, they become a sub-processor. Article 28(2) GDPR requires prior written authorization from the controller before engaging any sub-processor.
Specific authorization. The client must approve each individual sub-processor. Any changes require fresh approval. Maximum control, maximum friction.
General authorization. The client gives broad pre-approval. You must maintain a sub-processor list, notify clients of changes, and give them a reasonable window (typically 30 days) to object. This is what most MSPs use in practice.
In either case: you must update your Data Processing Agreements before connecting any AI to client data. If your DPA was written before AI tools existed (most were), it almost certainly does not cover this use case. The EDPB’s guidelines on Article 28 provide detailed guidance on what DPAs must contain.
In October 2024, the European Data Protection Board adopted Opinion 22/2024, specifically addressing processor and sub-processor chains. Key takeaway: processors remain fully liable to controllers for sub-processor performance. Even under general authorization, specific information about each sub-processor must be provided to the controller.
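In practice, the general authorization model reduces to maintaining a sub-processor register with notification dates and objection windows. A minimal sketch of such a register follows; the field names and the 30-day window are illustrative and should mirror your actual DPA wording.

```python
# Minimal sketch of a sub-processor register for the general authorization
# model: track each sub-processor, when clients were notified, and when the
# objection window closes. Field names are illustrative, not a legal standard.
from dataclasses import dataclass
from datetime import date, timedelta

OBJECTION_WINDOW_DAYS = 30  # the "reasonable window" from your DPA

@dataclass
class SubProcessor:
    name: str
    purpose: str
    data_categories: list[str]
    location: str       # e.g. "EU Data Zone"
    notified_on: date   # date clients were formally notified

    @property
    def objection_deadline(self) -> date:
        return self.notified_on + timedelta(days=OBJECTION_WINDOW_DAYS)

    def may_activate(self, today: date) -> bool:
        # Only enable the sub-processor once the objection window has
        # closed and no client has objected (objections tracked elsewhere).
        return today > self.objection_deadline

register = [
    SubProcessor(
        name="Azure OpenAI",
        purpose="Aggregated service reporting",
        data_categories=["ticket statistics", "service metrics"],
        location="EU Data Zone",
        notified_on=date(2026, 2, 1),
    ),
]
```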
This is the critical question. Different AI providers handle your data very differently. Here’s the comparison that matters:
| Provider | Trains on your data? | EU data residency | DPA available | DPF certified |
|---|---|---|---|---|
| Azure OpenAI | No | Yes — EU Data Zone | Yes | Yes |
| OpenAI API | No (API) | No | Yes + SCCs | Yes |
| Anthropic Claude | No (API) | Via AWS/GCP only | Yes + SCCs | Unclear |
| AWS Bedrock | No | Yes — Frankfurt, Ireland, Paris, Stockholm | Yes | Yes |
| Google Vertex AI | No | Yes — EU regions | Yes | Yes |
| Self-hosted (Llama, Mistral) | No — you control it | Wherever you host it | N/A | N/A |
API usage ≠ consumer usage. When you use ChatGPT via the website, OpenAI may use your input for training. When you use the API, they don’t. The same applies to Claude. Always use the commercial API, never consumer products, for client data.
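As a minimal sketch, this is the shape of a commercial API call (here via the OpenAI Python SDK), which falls under the DPA and no-training terms described above. The model name and prompt are examples only.

```python
# Minimal sketch: always call the commercial API (covered by a signed DPA,
# not used for training) instead of pasting client data into a consumer
# chat UI. Model name and prompt content are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You analyze aggregated MSP service metrics."},
        {"role": "user",
         "content": "Summarize: 87 password resets across 12 clients this week."},
    ],
)
print(response.choices[0].message.content)
```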
Each hosting model trades off between capability, privacy, cost, and compliance burden. Here’s what each means for your MSP in practice.
Azure OpenAI. GPT models hosted in Microsoft's EU data centers. Your data stays in the EU Data Zone. No training on your data. Private endpoints available. Already in your Microsoft ecosystem.
AWS Bedrock. Claude, Llama, and others via AWS infrastructure. EU regions: Frankfurt, Ireland, Paris, Stockholm. Full AWS compliance stack. Data never leaves your chosen region.
Google Vertex AI. Gemini and Claude models. EU regions available. Zero data retention policy. Strong compliance portfolio. Ideal if you're already in the Google Cloud ecosystem.
Azure OpenAI with EU Data Zone hits the sweet spot. You’re already paying for Microsoft 365 and Azure. Your Power BI data is already in the Microsoft ecosystem. EU data residency is built in. SOC 2, ISO 27001, GDPR: it’s all covered. For Anglosphere MSPs without EU clients, the direct API is equally viable.
If your AI provider is US-based (and most are), sending EU client data to their servers is an international data transfer under GDPR Chapter V (Articles 44-49). The legal mechanism for this has been invalidated twice before, and may be challenged again.
The EU-US Data Privacy Framework survived its first legal challenge in September 2025. But Philippe Latombe’s appeal was filed October 31, 2025, escalating to the European Court of Justice. The ECJ invalidated both previous frameworks (Safe Harbor — Schrems I and Privacy Shield — Schrems II).
Don’t rely solely on the DPF. Maintain Standard Contractual Clauses (SCCs) as a backup. Conduct a Transfer Impact Assessment. If the DPF falls, you’ll need SCCs with supplementary measures immediately.
The simplest way to avoid this entire issue: choose an AI provider with EU data residency. Azure OpenAI with the EU Data Zone, AWS Bedrock in Frankfurt, or Google Vertex AI in an EU region. If your data never leaves the EU, the transfer question doesn’t arise.
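Here is a minimal sketch of pinning processing to an EU-resident endpoint using the Azure OpenAI client. The resource name, deployment name, and API version are placeholders for your own setup.

```python
# Minimal sketch: routing AI calls through an EU-resident Azure OpenAI
# resource so no international transfer occurs. Resource name, deployment
# name, and API version are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-eu-resource>.openai.azure.com",  # EU Data Zone resource
    api_key="<azure-openai-key>",
    api_version="2024-06-01",  # example; use the version you deployed
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # Azure deployment name, not a raw model name
    messages=[{"role": "user", "content": "Ticket volume trend, last 30 days?"}],
)
print(response.choices[0].message.content)
```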
MSPs in different countries face different (but overlapping) requirements. Here’s the quick reference:
| Jurisdiction | Key law | AI-specific notes |
|---|---|---|
| Netherlands, Belgium, Luxembourg | GDPR + national implementations | Full GDPR applies. EU AI Act (full applicability August 2, 2026). Dutch DPA (Autoriteit Persoonsgegevens) is active on AI enforcement. Belgian DPA: GBA. Prefer EU-resident AI providers. |
| United Kingdom | UK GDPR + Data Protection Act 2018 | Essentially mirrors EU GDPR. UK has its own adequacy decisions. UK-US data bridge exists separately from EU-US DPF. The ICO published AI-specific guidance. |
| United States | State laws (CCPA/CPRA, etc.) | No federal privacy law. CCPA/CPRA applies to California residents’ data. 15+ states have privacy laws. For US-only MSPs with US-only clients, compliance burden is lighter, but growing. |
| Canada | PIPEDA + provincial laws | Consent-based framework. The proposed CPPA (Consumer Privacy Protection Act) would strengthen AI obligations. Quebec’s Law 25 already imposes strict requirements. |
| Australia | Privacy Act 1988 + APPs | Australian Privacy Principles apply. Cross-border disclosure rules (APP 8) require reasonable steps to ensure overseas recipients comply. No AI-specific legislation yet. |
| New Zealand | Privacy Act 2020 | 12 Information Privacy Principles. Cross-border transfer rules require adequate protections. NZ has EU adequacy status, simplifying data flows. DPA: Office of the Privacy Commissioner. |
In January 2026, major insurers began applying AI exclusion endorsements to policies. Verisk (the industry standard-setter for insurance forms) has released new AI-specific exclusion language that carriers are adopting rapidly. This directly affects MSPs.
If AI suggests incorrect remediation and causes a client outage, the MSP is liable. AI providers universally disclaim output accuracy. Your E&O may not cover this. See the NIST AI Risk Management Framework for risk assessment guidance.
Standard cyber policies no longer cover deepfake fraud losses as of January 2026. Verisk’s new endorsements explicitly exclude generative AI exposures. The FBI has issued guidance on deepfake threats.
New exclusions cover not just your own AI, but third-party AI platforms used in operations. Using OpenAI or Claude in your workflow may trigger exclusions.
Failing to disclose AI usage to your insurer could void coverage entirely. Proactive disclosure is essential, even if it affects premiums.
Review your current cyber and E&O policies for AI exclusion language before your next renewal. Ask your broker specifically about AI endorsements. Document your AI governance. Insurers increasingly tie premiums to demonstrable AI governance maturity.
GDPR Article 5(1)(c) requires data to be “limited to what is necessary.” The EDPB Opinion 28/2024 on AI models and personal data further clarifies how this principle applies to AI processing. Every API call to an AI service is a processing operation. The less personal data you send, the lower your compliance burden.
Level 1: Strip PII before sending to AI. Remove names, emails, phone numbers, IP addresses. Replace them with ticket IDs, anonymized references ("User-A", "Client-7"), and device categories.
Level 2: Aggregate. For reporting and trend analysis, send aggregated data. Instead of 500 individual tickets, send "87 password reset requests across 12 clients." This may qualify as anonymization, removing GDPR applicability entirely.
Level 3: Pseudonymize. Replace identifiers with pseudonyms before AI processing. Maintain a mapping table locally (never sent to the AI). Note: pseudonymized data is still personal data under GDPR, but pseudonymization is a recognized safeguard under Article 32.
Level 4: Self-host. Run models like Llama 3 or Mistral on your own hardware. Zero data leaves your network. Trade-off: lower model quality, GPU hardware costs, no vendor support. Best for highly sensitive environments. Code sketches of Levels 1-3 follow below.
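First, a minimal sketch of Levels 1 and 3 combined: regex-based redaction plus a local pseudonym mapping table. The patterns are illustrative, not an exhaustive PII detector; a production setup would use a proper PII/NER pipeline.

```python
# Minimal sketch of Level 1 (strip PII) and Level 3 (pseudonymize with a
# local mapping table). The regexes are illustrative only; production use
# needs a dedicated PII detection pipeline.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

pseudonyms: dict[str, str] = {}  # local mapping table, never sent to the AI

def pseudonymize(name: str) -> str:
    if name not in pseudonyms:
        pseudonyms[name] = f"User-{len(pseudonyms) + 1}"
    return pseudonyms[name]

def redact(text: str, known_names: list[str]) -> str:
    text = EMAIL_RE.sub("[email]", text)
    text = PHONE_RE.sub("[phone]", text)
    for name in known_names:  # names you already know from the PSA record
        text = text.replace(name, pseudonymize(name))
    return text

ticket = "Jane Doe (jane@client.example, +31 20 123 4567) cannot log in."
print(redact(ticket, known_names=["Jane Doe"]))
# -> "User-1 ([email], [phone]) cannot log in."
```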
Where Proxuma Power BI fits: Because the data is already structured in a semantic model with clear measures and dimensions, you can query aggregated metrics (total hours, ticket counts, revenue per service line) rather than raw record-level data. This naturally aligns with Level 2 data minimization. The AI sees numbers and categories, not individual names and ticket descriptions.
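And a minimal sketch of Level 2: collapsing record-level tickets into an aggregate string before anything reaches the AI. The field names are illustrative, not the Proxuma schema.

```python
# Minimal sketch of Level 2 aggregation: only a summary string leaves your
# network; the individual ticket rows stay local. Field names are examples.
tickets = [
    {"client": "Client-7", "category": "password_reset"},
    {"client": "Client-3", "category": "password_reset"},
    {"client": "Client-7", "category": "outage"},
    # ... hundreds more record-level rows that never leave your network
]

resets = [t for t in tickets if t["category"] == "password_reset"]
summary = (f"{len(resets)} password reset requests across "
           f"{len({t['client'] for t in resets})} clients")

print(summary)  # -> "2 password reset requests across 2 clients"
# Only this aggregate string is sent to the AI, not the ticket rows.
```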
Before connecting any AI to your client data, work through this list. Items are ordered by priority.
Under GDPR’s general authorization model, you must formally notify clients before introducing a new sub-processor. Here’s a template you can adapt:
Subject: Notice of New Sub-Processor — AI-Assisted Service Management

Dear [Client Name],

As part of our continuous improvement of service delivery, we are writing to inform you of our intention to engage [AI Provider Name] as a sub-processor under our existing Data Processing Agreement dated [Date].

PURPOSE: [AI Provider] will be used for [specific purpose, e.g., "automated ticket classification, reporting analytics, and service trend analysis"] to improve response times and service quality.

DATA PROCESSED: The following categories may be processed:
- Aggregated service metrics and ticket statistics
- Device categories and configuration summaries
- Service performance indicators

We do NOT send the following to this sub-processor:
- Passwords or authentication credentials
- Financial data (banking, credit card information)
- Individual employee personal details

SAFEGUARDS IN PLACE:
- Data encrypted in transit (TLS 1.2+) and at rest
- [Provider] maintains SOC 2 Type II / ISO 27001 certification
- Data processed within [EU / under EU-US Data Privacy Framework with Standard Contractual Clauses]
- [Provider] does not use customer data for model training

YOUR RIGHT TO OBJECT: In accordance with Section [X] of our Data Processing Agreement, you have 30 days from receipt of this notice to object. Please direct objections in writing to [email].

Kind regards,
[MSP Name]
[Compliance Officer / DPO]
The EU AI Act (Regulation 2024/1689) reaches full applicability on August 2, 2026. Even before then, some obligations already apply. The AI Act Explorer provides a navigable version of the full text.
AI literacy obligations (Article 4). MSPs must ensure staff understands the AI systems they deploy. This includes knowing what data flows where, what the limitations are, and when human oversight is needed.
At full applicability (August 2, 2026), most MSP AI use cases (ticket processing, reporting) won't be "high-risk." But AI used for workforce management or access control decisions could trigger high-risk obligations with additional documentation requirements.
Regardless of risk classification, transparency is required (Article 50): if clients interact with AI-generated content, they must be informed.
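A minimal sketch of one such transparency measure: labeling AI-generated report content before it reaches a client. The wording and function name are illustrative, not language prescribed by the AI Act.

```python
# Minimal sketch of an Article 50-style transparency measure: append an
# AI disclosure to AI-generated client-facing content. The notice wording
# is an example, not prescribed text.
AI_DISCLOSURE = (
    "Note: portions of this report were generated with AI assistance "
    "and reviewed by our service team."
)

def with_ai_disclosure(report_body: str) -> str:
    # Attach the disclosure as a visible footer on the report
    return f"{report_body}\n\n---\n{AI_DISCLOSURE}"

print(with_ai_disclosure("Q1 service summary: ticket volume down 12%."))
```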
Update your DPAs first. Using AI without updated client agreements is a contractual breach, regardless of GDPR compliance. This is the single most important step.
Prefer EU data residency. Azure OpenAI (EU Data Zone), AWS Bedrock (Frankfurt), or Google Vertex (EU) keep data in the EU. No international transfer, no Schrems III risk, no TIA required.
Check your insurance. Verisk's AI exclusion forms took effect January 2026. Don't assume you're covered. Disclose AI usage and document your governance.
Minimize what the AI sees. Because Proxuma Power BI structures your operational data into labeled measures and dimensions, AI can query aggregated metrics instead of raw records. Less personal data to the AI means less compliance burden for you.
Keep client data out of consumer AI tools. ChatGPT.com, Claude.ai, Gemini: these consumer interfaces may use your input for training. Always use commercial APIs with a signed DPA. Establish clear staff policies.
All claims in this guide are supported by the official sources below. Links verified February 2026.
• GDPR full text — gdpr-info.eu (interlinked, with recitals)
• EU AI Act (Regulation 2024/1689) — EUR-Lex official publication
• Standard Contractual Clauses — European Commission
• EU Adequacy Decisions — European Commission
• European Data Protection Board (EDPB) — EU-level guidance and opinions
• Autoriteit Persoonsgegevens — Dutch Data Protection Authority
• Information Commissioner's Office (ICO) — UK Data Protection Authority
• CNIL — French Data Protection Authority (TIA guidance)
• OAIC — Office of the Australian Information Commissioner
• Office of the Privacy Commissioner — New Zealand
| EDPB Opinions & Guidelines | Topic |
|---|---|
| Opinion 22/2024 | Processor and sub-processor chains under Article 28 |
| Opinion 28/2024 | AI models and personal data — data minimization in AI |
| Guidelines 02/2023 | Article 28 and the notion of processor |
| AI Provider | DPA | Privacy | DPF Status |
|---|---|---|---|
| Microsoft (Azure OpenAI) | DPA | Data privacy | Certified |
| OpenAI | DPA | Enterprise privacy | Certified |
| Anthropic (Claude) | DPA | Privacy center | — |
| AWS (Bedrock) | DPA | Security & compliance | Certified |
| Google (Vertex AI) | DPA | Data governance | Certified |
Additional resources:
• EU-US Data Privacy Framework — US Department of Commerce
• UK-US Data Bridge — UK Government
• ICO AI Guidance — UK Information Commissioner’s Office
• NIST AI — US National Institute of Standards and Technology
• CNIL Transfer Impact Assessment Guide — French DPA
• Court of Justice of the EU (CJEU) — Schrems I & II case law
• California Consumer Privacy Act (CCPA) — CA Attorney General
• PIPEDA — Canadian federal privacy law
Proxuma Power BI gives you the structured data layer that makes AI queries possible, with the labeling and organization that keeps your compliance burden low.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports in minutes, not days.