AI Privacy Compliance Guide

This report provides a detailed breakdown of AI privacy compliance for managed service providers.

The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.

Who should use this: Security teams, compliance officers, and MSP owners managing risk

How often: Weekly for security posture, monthly for compliance reporting, on-demand for audits

Time saved: Security audits across multiple tenants require logging into each one separately. This report aggregates them.
Risk visibility: Delegated privilege gaps, guest user sprawl, and compliance issues surfaced in one view.
Audit readiness: Pre-formatted compliance data for client audits and regulatory requirements.

Report category: Security & Compliance
Data source: Autotask PSA · Datto RMM · Datto Backup · Microsoft 365 · SmileBack · HubSpot · IT Glue
Refresh: Real-time via Power BI
Generation time: Under 15 minutes
AI required: Claude, ChatGPT, or Copilot
Audience: Security teams, compliance officers
Where to find this in Proxuma
Power BI › Security › AI Privacy Compliance Guide
What you can measure in this report
Privacy & Compliance · AI + Data Protection

Using AI with Client Data: A Privacy & Compliance Guide for MSPs

You want to use AI to query your business data through Power BI. But your clients' data flows through that AI. This guide covers the legal obligations, hosting options, and practical steps so you can make an informed decision.


Why this matters for every MSP

You’re an MSP. Your business systems (PSA, RMM, CRM) contain ticket data, client names, employee names, device details, contract values, and operational metrics for every customer you serve. This is personal data under GDPR, and it belongs to your clients, not to you.

When you connect an AI agent to query that data, every question you ask sends information to the AI provider’s servers. That makes the AI provider a sub-processor under data protection law. And that triggers a chain of legal obligations under Article 28 GDPR you cannot ignore.

The chain of responsibility

Your client (data controller) → You, the MSP (data processor) → The AI provider (sub-processor). Each link in this chain carries legal obligations. If any link breaks, you are liable.

Proxuma Power BI gives you a structured semantic model with clearly labeled measures, tables, and relationships across your entire tool stack. This is what makes AI queries possible in the first place: the data is structured and labeled, so an AI can write precise DAX queries instead of fumbling through raw APIs. But the AI still needs to see the query results. And that’s where the privacy question begins.
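To make this concrete, here is a minimal sketch of what an aggregated query might look like when sent through the Power BI REST `executeQueries` endpoint. The dataset ID, table name, and measure names below are illustrative placeholders, not actual Proxuma model identifiers; the AI sees only the summarized result of such a query, never raw records.

```python
import json

# Illustrative only: the dataset ID, table, and measure names are
# placeholders, not real Proxuma model identifiers.
DATASET_ID = "00000000-0000-0000-0000-000000000000"

# An aggregated DAX query: the result is ticket counts and hours per
# queue, with no requester names or ticket descriptions.
dax_query = """
EVALUATE
SUMMARIZECOLUMNS(
    'Tickets'[Queue],
    "Ticket Count", [Ticket Count],
    "Total Hours", [Total Hours Worked]
)
"""

# Payload shape for the Power BI REST executeQueries endpoint:
# POST https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries
payload = {
    "queries": [{"query": dax_query}],
    "serializerSettings": {"includeNulls": True},
}

print(json.dumps(payload)[:80])
```

The point of the sketch is the shape of the payload: one DAX query over labeled measures, returning aggregates rather than row-level personal data.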


Your obligations under GDPR (Article 28)

Under GDPR, MSPs are data processors. Your clients are data controllers. When you introduce an AI provider, they become a sub-processor. Article 28(2) GDPR requires prior written authorization from the controller before engaging any sub-processor.

Two authorization models

1. Specific authorization. The client must approve each individual sub-processor. Any changes require fresh approval. Maximum control, maximum friction.

2. General authorization. The client gives broad pre-approval. You must maintain a sub-processor list, notify clients of changes, and give them a reasonable window (typically 30 days) to object. This is what most MSPs use in practice.

In either case: you must update your Data Processing Agreements before connecting any AI to client data. If your DPA was written before AI tools existed (most were), it almost certainly does not cover this use case. The EDPB’s guidelines on Article 28 provide detailed guidance on what DPAs must contain.

EDPB Opinion 22/2024

In October 2024, the European Data Protection Board adopted Opinion 22/2024, specifically addressing processor and sub-processor chains. Key takeaway: processors remain fully liable to controllers for sub-processor performance. Even under general authorization, specific information about each sub-processor must be provided to the controller.


Where does your data actually go?

This is the critical question. Different AI providers handle your data very differently. Here’s the comparison that matters:

| Provider | Trains on your data? | EU data residency | DPA available | DPF certified |
|---|---|---|---|---|
| Azure OpenAI | No | Yes — EU Data Zone | Yes | Yes |
| OpenAI API | No (API) | No | Yes + SCCs | Yes |
| Anthropic Claude | No (API) | Via AWS/GCP only | Yes + SCCs | Unclear |
| AWS Bedrock | No | Yes — Frankfurt, Ireland, Paris, Stockholm | Yes | Yes |
| Google Vertex AI | No | Yes — EU regions | Yes | Yes |
| Self-hosted (Llama, Mistral) | No — you control it | Wherever you host it | N/A | N/A |
Critical distinction

API usage ≠ consumer usage. When you use ChatGPT via the website, OpenAI may use your input for training. When you use the API, they don’t. The same applies to Claude. Always use the commercial API, never consumer products, for client data.


Five paths to AI, compared

Each hosting model trades off between capability, privacy, cost, and compliance burden. Here’s what each means for your MSP in practice.

Azure OpenAI

GPT models hosted in Microsoft’s EU data centers. Your data stays in the EU Data Zone. No training on your data. Private endpoints available. Already in your Microsoft ecosystem.

EU residency SOC 2 + ISO 27001

AWS Bedrock

Claude, Llama, and others via AWS infrastructure. EU regions: Frankfurt, Ireland, Paris, Stockholm. Full AWS compliance stack. Data never leaves your chosen region.

EU residency SOC 1/2/3 + ISO

Google Vertex AI

Gemini and Claude models. EU regions available. Zero data retention policy. Strong compliance portfolio. Ideal if you’re already in the Google Cloud ecosystem.

EU residency SOC 2 + HIPAA

Direct API (OpenAI/Anthropic)

Simplest integration. Good DPAs available. But: data processed in the US. Requires SCCs + Transfer Impact Assessment for EU MSPs. Adequate for Anglosphere MSPs.

US processing DPA + SCCs

Self-hosted (Ollama/vLLM)

Models like Llama 3, Mistral, Phi run on your own hardware. Zero data leaves your network. But: lower quality than frontier models, requires GPU hardware, no vendor support.

Full control Lower quality
The pragmatic choice for most MSPs

Azure OpenAI with EU Data Zone hits the sweet spot. You’re already paying for Microsoft 365 and Azure. Your Power BI data is already in the Microsoft ecosystem. EU data residency is built in. SOC 2, ISO 27001, GDPR: it’s all covered. For Anglosphere MSPs without EU clients, the direct API is equally viable.


The EU-US data transfer question (and Schrems III)

If your AI provider is US-based (and most are), sending EU client data to their servers is an international data transfer under GDPR Chapter V (Articles 44-49). The legal mechanism for this has been invalidated twice before, and may be challenged again.

Current status: DPF under review

The EU-US Data Privacy Framework survived its first legal challenge in September 2025. But Philippe Latombe’s appeal was filed October 31, 2025, escalating to the European Court of Justice. The ECJ invalidated both previous frameworks (Safe Harbor — Schrems I and Privacy Shield — Schrems II).

What to do: belt and suspenders

Don’t rely solely on the DPF. Maintain Standard Contractual Clauses (SCCs) as a backup. Conduct a Transfer Impact Assessment. If the DPF falls, you’ll need SCCs with supplementary measures immediately.

The simplest way to avoid this entire issue: choose an AI provider with EU data residency. Azure OpenAI with the EU Data Zone, AWS Bedrock in Frankfurt, or Google Vertex AI in an EU region. If your data never leaves the EU, the transfer question doesn’t arise.


What applies where

MSPs in different countries face different (but overlapping) requirements. Here’s the quick reference:

| Jurisdiction | Key law | AI-specific notes |
|---|---|---|
| Netherlands, Belgium, Luxembourg | GDPR + national implementations | Full GDPR applies. EU AI Act (full applicability August 2, 2026). Dutch DPA (Autoriteit Persoonsgegevens) is active on AI enforcement. Belgian DPA: GBA. Prefer EU-resident AI providers. |
| United Kingdom | UK GDPR + Data Protection Act 2018 | Essentially mirrors EU GDPR. UK has its own adequacy decisions. UK-US data bridge exists separately from EU-US DPF. The ICO has published AI-specific guidance. |
| United States | State laws (CCPA/CPRA, etc.) | No federal privacy law. CCPA/CPRA applies to California residents' data. 15+ states have privacy laws. For US-only MSPs with US-only clients, the compliance burden is lighter, but growing. |
| Canada | PIPEDA + provincial laws | Consent-based framework. The proposed CPPA (Consumer Privacy Protection Act) would strengthen AI obligations. Quebec's Law 25 already imposes strict requirements. |
| Australia | Privacy Act 1988 + APPs | Australian Privacy Principles apply. Cross-border disclosure rules (APP 8) require reasonable steps to ensure overseas recipients comply. No AI-specific legislation yet. |
| New Zealand | Privacy Act 2020 | 12 Information Privacy Principles. Cross-border transfer rules require adequate protections. NZ has EU adequacy status, simplifying data flows. DPA: Office of the Privacy Commissioner. |

Cyber insurance and AI: the exclusion wave

Starting January 2026, major insurers began applying AI exclusion endorsements to policies. Verisk (the industry standard-setter for insurance forms) released new AI-specific exclusion language that carriers are adopting rapidly. This directly affects MSPs.

AI output errors

If AI suggests incorrect remediation and causes a client outage, the MSP is liable. AI providers universally disclaim output accuracy. Your E&O may not cover this. See the NIST AI Risk Management Framework for risk assessment guidance.

Deepfake fraud

Standard cyber policies no longer cover deepfake fraud losses as of January 2026. Verisk’s new endorsements explicitly exclude generative AI exposures. The FBI has issued guidance on deepfake threats.

Third-party AI platforms

New exclusions cover not just your own AI, but third-party AI platforms used in operations. Using OpenAI or Claude in your workflow may trigger exclusions.

Non-disclosure risk

Failing to disclose AI usage to your insurer could void coverage entirely. Proactive disclosure is essential, even if it affects premiums.

Action required

Review your current cyber and E&O policies for AI exclusion language before your next renewal. Ask your broker specifically about AI endorsements. Document your AI governance. Insurers increasingly tie premiums to demonstrable AI governance maturity.


What data should you actually send to the AI?

GDPR Article 5(1)(c) requires data to be “limited to what is necessary.” The EDPB Opinion 28/2024 on AI models and personal data further clarifies how this principle applies to AI processing. Every API call to an AI service is a processing operation. The less personal data you send, the lower your compliance burden.

Level 1: Field-level filtering (minimum)

Strip PII before sending to AI. Remove names, emails, phone numbers, IP addresses. Replace with ticket IDs, anonymized references (“User-A”, “Client-7”), device categories.
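A minimal sketch of such a filter, assuming regex-based scrubbing of a few common PII patterns. A production filter would need broader coverage (person names, addresses, free-text identifiers), typically via named-entity detection on top of patterns like these:

```python
import re

# Level-1 sanitizer sketch: scrub common PII patterns before text is
# sent to an AI provider. Patterns are illustrative, not exhaustive.
# Order matters: IPs must be replaced before the looser phone pattern.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def sanitize(text: str) -> str:
    """Replace each matched PII pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(sanitize("Contact jan@example.nl from 10.0.0.12 or +31 20 123 4567"))
# -> Contact [EMAIL] from [IPV4] or [PHONE]
```

Log what the filter removes, so you can demonstrate data minimization during an audit.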

Level 2: Aggregation (recommended for analytics)

For reporting and trend analysis, send aggregated data. Instead of 500 individual tickets, send “87 password reset requests across 12 clients.” This may qualify as anonymization, removing GDPR applicability entirely.
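As a sketch, aggregation can be as simple as collapsing record-level tickets into counts before anything reaches the AI. The field names here are illustrative:

```python
from collections import Counter

# Level-2 sketch: collapse record-level tickets into aggregate counts.
# Only the summary dict is ever sent to the AI; the raw records stay local.
tickets = [
    {"client": "Client-A", "category": "password reset"},
    {"client": "Client-B", "category": "password reset"},
    {"client": "Client-A", "category": "printer"},
]

def aggregate(records):
    """Summarize ticket records into category counts and a client count."""
    by_category = Counter(r["category"] for r in records)
    clients = len({r["client"] for r in records})
    return {"by_category": dict(by_category), "distinct_clients": clients}

summary = aggregate(tickets)
print(summary)
# -> {'by_category': {'password reset': 2, 'printer': 1}, 'distinct_clients': 2}
```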

Level 3: Pseudonymization (good compromise)

Replace identifiers with pseudonyms before AI processing. Maintain a mapping table locally (never sent to the AI). Note: pseudonymized data is still personal data under GDPR, but it’s a recognized safeguard under Article 32.
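A minimal pseudonymization sketch with a local mapping table. The mapping dictionary stays on your side and is never sent to the AI:

```python
import itertools

# Level-3 sketch: replace identifiers with stable pseudonyms.
# The mapping table is kept locally so AI output can be resolved
# back to real records on your side only.
class Pseudonymizer:
    def __init__(self, prefix="User"):
        self._mapping = {}                 # real identifier -> pseudonym
        self._counter = itertools.count(1)
        self._prefix = prefix

    def pseudonymize(self, identifier: str) -> str:
        """Return a stable pseudonym, creating one on first use."""
        if identifier not in self._mapping:
            self._mapping[identifier] = f"{self._prefix}-{next(self._counter)}"
        return self._mapping[identifier]

    def resolve(self, pseudonym: str) -> str:
        """Reverse lookup for local use only (never exposed to the AI)."""
        return next(k for k, v in self._mapping.items() if v == pseudonym)

p = Pseudonymizer()
print(p.pseudonymize("jan.devries@client.nl"))  # User-1
print(p.pseudonymize("jan.devries@client.nl"))  # stable: User-1 again
```

Because the same identifier always maps to the same pseudonym, trend analysis across queries still works while the AI never sees the real value.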

Level 4: On-premise AI (maximum privacy)

Run models like Llama 3 or Mistral on your own hardware. Zero data leaves your network. Trade-off: lower model quality, GPU hardware costs, no vendor support. Best for highly sensitive environments.
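For illustration, a local call via Ollama's REST API might look like the sketch below. The endpoint is Ollama's default local address and the model name is an example; the request is only constructed here, not sent:

```python
import json

# Level-4 sketch: a request to a locally hosted model via Ollama's
# REST API. Nothing leaves your network; 11434 is Ollama's default
# local port, and "llama3" is an example model name.
OLLAMA_URL = "http://localhost:11434/api/generate"

request_body = {
    "model": "llama3",
    "prompt": "Summarize: 87 password reset requests across 12 clients this week.",
    "stream": False,
}

# In production you would POST this, e.g.:
#   requests.post(OLLAMA_URL, json=request_body, timeout=120)
print(json.dumps(request_body, indent=2))
```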

Where Proxuma Power BI fits: Because the data is already structured in a semantic model with clear measures and dimensions, you can query aggregated metrics (total hours, ticket counts, revenue per service line) rather than raw record-level data. This naturally aligns with Level 2 data minimization. The AI sees numbers and categories, not individual names and ticket descriptions.


The MSP AI compliance checklist

Before connecting any AI to your client data, work through this list. Items are ordered by priority.

- Update all client DPAs to include AI sub-processor authorization (GDPR Article 28(2)). Without this, any AI processing of client data is a contractual breach.
- Send formal sub-processor notification to all clients. Name the AI provider, purpose, data categories, and safeguards. Give a 30-day objection window.
- Verify the AI provider's DPA, certifications, and transfer mechanisms. Check for SOC 2 Type II, DPF certification, SCCs. Ensure their DPA meets your client DPA requirements.
- Conduct a Transfer Impact Assessment (EU/Benelux MSPs with US providers). Required under GDPR Articles 44-49 if relying on SCCs. The CNIL published a comprehensive TIA guide.
- Implement a data sanitization/minimization layer. Build a filter between your business systems and the AI. Strip PII, aggregate where possible, log all data sent.
- Conduct a Data Protection Impact Assessment (DPIA). GDPR Article 35: may be required for large-scale automated processing of personal data. See ICO DPIA guidance.
- Review cyber and E&O insurance for AI exclusions. Check for AI exclusion endorsements (Verisk forms, effective January 2026). Disclose AI usage proactively.
- Establish AI usage policies for staff. No credentials in prompts. No raw client data in consumer AI tools. Document what's allowed and what isn't.
- Ensure AI literacy training for staff. EU AI Act Article 4 (in effect since February 2, 2025): staff must understand the AI systems they use.
- Maintain SCCs as a backup to the DPF for US AI providers. If the ECJ invalidates the DPF (Schrems III), you need SCCs immediately. Keep them in place now.

How to notify your clients

Under GDPR’s general authorization model, you must formally notify clients before introducing a new sub-processor. Here’s a template you can adapt:

Subject: Notice of New Sub-Processor — AI-Assisted Service Management

Dear [Client Name],

As part of our continuous improvement of service delivery, we are
writing to inform you of our intention to engage [AI Provider Name]
as a sub-processor under our existing Data Processing Agreement
dated [Date].

PURPOSE: [AI Provider] will be used for [specific purpose, e.g.,
"automated ticket classification, reporting analytics, and service
trend analysis"] to improve response times and service quality.

DATA PROCESSED: The following categories may be processed:
- Aggregated service metrics and ticket statistics
- Device categories and configuration summaries
- Service performance indicators

We do NOT send the following to this sub-processor:
- Passwords or authentication credentials
- Financial data (banking, credit card information)
- Individual employee personal details

SAFEGUARDS IN PLACE:
- Data encrypted in transit (TLS 1.2+) and at rest
- [Provider] maintains SOC 2 Type II / ISO 27001 certification
- Data processed within [EU/under EU-US Data Privacy Framework
  with Standard Contractual Clauses]
- [Provider] does not use customer data for model training

YOUR RIGHT TO OBJECT: In accordance with Section [X] of our Data
Processing Agreement, you have 30 days from receipt of this notice
to object. Please direct objections in writing to [email].

Kind regards,
[MSP Name]
[Compliance Officer / DPO]

The EU AI Act: what MSPs need to know

The EU AI Act (Regulation 2024/1689) reaches full applicability on August 2, 2026. Even before then, some obligations already apply. The AI Act Explorer provides a navigable version of the full text.

Already in effect (since Feb 2, 2025)

AI literacy obligations (Article 4). MSPs must ensure staff understands the AI systems they deploy. This includes knowing what data flows where, what the limitations are, and when human oversight is needed.

Coming August 2, 2026

Full applicability. Most MSP AI use cases (ticket processing, reporting) won’t be “high-risk.” But AI used for workforce management or access control decisions could trigger high-risk obligations with additional documentation requirements.

Regardless of risk classification, transparency is required (Article 50): if clients interact with AI-generated content, they must be informed.


Key takeaways

Update your DPAs before you connect anything

Using AI without updated client agreements is a contractual breach, regardless of GDPR compliance. This is the single most important step.

EU data residency eliminates the hardest compliance questions

Azure OpenAI (EU Data Zone), AWS Bedrock (Frankfurt), or Google Vertex (EU) keep data in the EU. No international transfer, no Schrems III risk, no TIA required.

Check your insurance: AI exclusions are here

Verisk’s AI exclusion forms took effect January 2026. Don’t assume you’re covered. Disclose AI usage and document your governance.

Structured data is your compliance advantage

Because Proxuma Power BI structures your operational data into labeled measures and dimensions, AI can query aggregated metrics instead of raw records. Less personal data to the AI means less compliance burden for you.

Never use consumer AI products for client data

ChatGPT.com, Claude.ai, Gemini: these consumer interfaces may use your input for training. Always use commercial APIs with a signed DPA. Establish clear staff policies.

Sources & further reading

All claims in this guide are supported by the official sources below. Links verified February 2026.

EU & International Legislation

1. GDPR full text — gdpr-info.eu (interlinked, with recitals)
2. EU AI Act (Regulation 2024/1689) — EUR-Lex official publication
3. Standard Contractual Clauses — European Commission
4. EU Adequacy Decisions — European Commission

Data Protection Authorities

🇪🇺 European Data Protection Board (EDPB) — EU-level guidance and opinions
🇳🇱 Autoriteit Persoonsgegevens — Dutch Data Protection Authority
🇬🇧 Information Commissioner's Office (ICO) — UK Data Protection Authority
🇫🇷 CNIL — French Data Protection Authority (TIA guidance)
🇦🇺 OAIC — Office of the Australian Information Commissioner
🇳🇿 Office of the Privacy Commissioner — New Zealand

| EDPB Opinions & Guidelines | Topic |
|---|---|
| Opinion 22/2024 | Processor and sub-processor chains under Article 28 |
| Opinion 28/2024 | AI models and personal data — data minimization in AI |
| Guidelines 02/2023 | Article 28 and the notion of processor |
| AI Provider | DPA | Privacy | DPF Status |
|---|---|---|---|
| Microsoft (Azure OpenAI) | DPA | Data privacy | Certified |
| OpenAI | DPA | Enterprise privacy | Certified |
| Anthropic (Claude) | DPA | Privacy center | |
| AWS (Bedrock) | DPA | Security & compliance | Certified |
| Google (Vertex AI) | DPA | Data governance | Certified |

Additional resources:

EU-US Data Privacy Framework — US Department of Commerce
UK-US Data Bridge — UK Government
ICO AI Guidance — UK Information Commissioner’s Office
NIST AI — US National Institute of Standards and Technology
CNIL Transfer Impact Assessment Guide — French DPA
Court of Justice of the EU (CJEU) — Schrems I & II case law
California Consumer Privacy Act (CCPA) — CA Attorney General
PIPEDA — Canadian federal privacy law


Ready to explore AI-powered reporting?

Proxuma Power BI gives you the structured data layer that makes AI queries possible, with the labeling and organization that keeps your compliance burden low.

View AI-Powered Reports Get Started with Power BI

Generate this report from your own data

Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports in minutes, not days.

See more reports Get started