
GDPR-compliant AI agents — what it actually takes

GDPR applies to every AI agent platform that processes the personal data of individuals in the EU, regardless of where the vendor is headquartered. Compliance means more than a "we comply" badge: it requires a lawful basis per processing activity, data-subject rights implemented at the data layer, a signed DPA between customer and vendor, tenant-isolated storage, a transparent model-training policy, and audit logs that survive regulatory review. For AI agents specifically, Article 22 (automated decision-making) adds explicit requirements: the customer — not the vendor — must determine when human review is required.

GDPR restated for AI agent platforms

The six core principles from GDPR Article 5, translated to what they mean for an AI-agent vendor.

Lawfulness, fairness, transparency

Every processing activity needs a lawful basis (consent, contract, legitimate interest, etc.) logged per action. Customers get transparency about how agents reason and decide.

Purpose limitation

Data collected for one agent action cannot be repurposed for another without a new lawful basis. No "we use your data to improve our product" clauses.

Data minimization

Agents access only the fields they need for the specific task. OAuth 2.0 minimum scopes enforce this at the integration layer.
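Enforcing minimum scopes can be as simple as an allowlist check before an integration grant is accepted. A minimal sketch, assuming hypothetical task and scope names (the real scope strings depend on the provider's OAuth API):

```python
# Minimal-scope check: reject any integration request that asks for more
# than the OAuth scopes allowlisted for the task at hand.
# Task names and scope strings below are illustrative, not a real API.
TASK_SCOPES = {
    "read_calendar": {"calendar.readonly"},
    "send_email": {"mail.send"},
}

def scopes_allowed(task: str, requested: set) -> bool:
    """True only if every requested scope is needed for this task."""
    return requested <= TASK_SCOPES.get(task, set())

# An email agent asking for read access on top of send is over-scoped:
print(scopes_allowed("send_email", {"mail.send"}))              # allowed
print(scopes_allowed("send_email", {"mail.send", "mail.read"})) # rejected
```

The subset comparison (`<=`) makes the data-minimization rule mechanical: anything beyond the task's declared needs fails closed.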

Accuracy

Outputs are auditable; incorrect agent outputs can be corrected or erased. The audit log tracks each version of each record.

Storage limitation

Customer-configurable retention per data category. Agent conversation transcripts, call recordings and CRM events all have configurable TTLs.

Integrity and confidentiality

Encryption at rest (AES-256) and in transit (TLS 1.3). Tenant isolation is cryptographic, not merely logical. SOC 2 Type I audit target: Q3 2026.

10 technical implementations that prove GDPR compliance

A vendor can claim compliance. These are the implementations that demonstrate it when an auditor shows up.

Art. 6/7 · Lawful basis logged

Per-action lawful basis

Every agent action records its lawful basis (consent, contract, legitimate interest, legal obligation). Exportable per data subject on request.
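What "lawful basis logged per action" could look like as a data structure: a sketch with illustrative field names, showing the filter that produces a per-subject export on request.

```python
from dataclasses import dataclass, asdict
from enum import Enum

class LawfulBasis(Enum):
    # Art. 6(1) bases most relevant to agent actions
    CONSENT = "consent"
    CONTRACT = "contract"
    LEGITIMATE_INTEREST = "legitimate_interest"
    LEGAL_OBLIGATION = "legal_obligation"

@dataclass
class AgentActionRecord:
    action_id: str
    agent_id: str
    data_subject_id: str
    lawful_basis: LawfulBasis
    purpose: str
    timestamp: str  # ISO 8601, UTC

def export_for_subject(records, subject_id):
    """Filter the action log to one data subject (Art. 15 export)."""
    return [
        {**asdict(r), "lawful_basis": r.lawful_basis.value}
        for r in records
        if r.data_subject_id == subject_id
    ]

records = [
    AgentActionRecord("a-1", "invoice-agent", "ds-42",
                      LawfulBasis.CONTRACT, "issue invoice",
                      "2025-06-01T09:00:00Z"),
    AgentActionRecord("a-2", "crm-agent", "ds-99",
                      LawfulBasis.LEGITIMATE_INTEREST, "enrich lead",
                      "2025-06-01T09:05:00Z"),
]
print(export_for_subject(records, "ds-42"))
```

The point of the enum is that an action without a declared basis simply cannot be recorded, which is the property an auditor will probe.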

Art. 15 · Right of access

Subject-access export

One-click export of everything the platform holds about a named data subject — CRM, emails, call transcripts, agent-generated outputs.

Art. 17 · Right to erasure

Tenant-wide erasure

Erasure propagates across every agent's cache and the shared data graph within 30 days. Not "soft delete" — actual removal with a proof-of-deletion receipt.
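The erasure cascade plus receipt can be sketched as a hard delete across every store, followed by a tamper-evident receipt. Store names and receipt fields here are illustrative, not a specific product API:

```python
import hashlib
import json
from datetime import datetime, timezone

def erase_subject(stores, subject_id):
    """Remove a data subject from every store (hard delete, no tombstone)
    and return a deletion receipt with an integrity hash."""
    deleted_from = []
    for name, store in stores.items():
        if subject_id in store:
            del store[subject_id]  # actual removal, not a soft-delete flag
            deleted_from.append(name)
    receipt = {
        "subject_id": subject_id,
        "deleted_from": sorted(deleted_from),
        "completed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the receipt contents so the receipt itself is tamper-evident.
    receipt["receipt_hash"] = hashlib.sha256(
        json.dumps(receipt, sort_keys=True).encode()
    ).hexdigest()
    return receipt
```

A usage sketch: `erase_subject({"crm": {...}, "agent_cache": {...}}, "ds-1")` returns the receipt and leaves no copy of `ds-1` behind, which is the property "proof of deletion" has to demonstrate.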

Art. 20 · Data portability

Structured export

Machine-readable export (JSON, CSV) of all customer data, not just personal data. Re-importable into competitor platforms via standard schemas.

Art. 22 · Automated decision-making

Human-review gate

High-risk decisions (payments above threshold, account blocks, high-value contract clauses) require human approval by default. Configurable per customer policy.
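A human-review gate reduces to a policy lookup before the agent acts. A minimal sketch, with illustrative action names and an assumed default threshold (real defaults would come from the customer's policy):

```python
# Policy gate: decide whether an agent action must route to a human.
# Action names and the 500 EUR threshold are illustrative defaults;
# in practice the policy dict is configured per customer.
DEFAULT_POLICY = {
    "payment": {"requires_review_above_eur": 500},
    "account_block": {"always_review": True},
}

def needs_human_review(action, amount_eur=0.0, policy=DEFAULT_POLICY):
    """True if this action must wait for human approval (Art. 22 gate)."""
    rule = policy.get(action, {})
    if rule.get("always_review"):
        return True
    threshold = rule.get("requires_review_above_eur")
    return threshold is not None and amount_eur > threshold

print(needs_human_review("payment", 1000.0))   # routed to a human
print(needs_human_review("payment", 100.0))    # agent proceeds
print(needs_human_review("account_block"))     # always routed to a human
```

Because the gate is data, not code, the customer (the controller) can tighten or relax it without a vendor release, which is what Article 22 configurability requires.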

Art. 25 · Privacy by design

Tenant isolation

Cryptographic tenant isolation from database layer up. No shared vector stores, no shared prompt caches across customers.
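"Separate keys per customer" usually means deriving a distinct data key per tenant from a master key held in a KMS/HSM. A stdlib sketch of the derivation idea only (production systems would use a KMS and a vetted KDF, not this snippet):

```python
import hashlib
import hmac

def tenant_key(master_key, tenant_id):
    """Derive a per-tenant data key from a master key (HKDF-style sketch).
    Shown only to make 'separate encryption keys per customer' concrete;
    real deployments keep the master key in a KMS/HSM."""
    return hmac.new(master_key,
                    ("tenant:" + tenant_id).encode(),
                    hashlib.sha256).digest()

# Two tenants never share a key, so a leaked key (or a query against the
# wrong tenant's store) cannot decrypt another customer's data.
k_acme = tenant_key(b"master-secret", "acme")
k_globex = tenant_key(b"master-secret", "globex")
print(k_acme != k_globex)  # True
```

This is the difference between cryptographic and merely logical isolation: with logical partitioning, one bad `WHERE` clause exposes another tenant; with per-tenant keys, it yields ciphertext.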

Art. 28 · Processor obligations

DPA with sub-processor list

Signed DPA between customer and vendor. Sub-processor list (including LLM providers) published and version-controlled. Customers notified of sub-processor changes with objection period.

Art. 30 · Processing records

Audit log

Immutable audit log of every agent action: timestamp, agent identity, input, output, confidence, lawful basis, data-subject identifier. Exportable for supervisory authority.
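"Immutable" is commonly implemented as a hash chain: each entry commits to the previous one, so editing any past record breaks every later hash. A sketch of the chaining scheme (the field set mirrors the list above; the exact mechanism varies by vendor):

```python
import hashlib
import json

def _digest(body):
    return hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()

def append_entry(log, entry):
    """Append an audit entry chained to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = dict(entry, prev_hash=prev_hash)
    body["entry_hash"] = _digest({k: v for k, v in body.items()
                                  if k != "entry_hash"})
    log.append(body)

def verify_chain(log):
    """Recompute every hash; any retroactive edit is detected."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        if e["prev_hash"] != prev or _digest(body) != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True
```

An auditor can verify the exported log offline with nothing but the entries themselves, which is exactly what "exportable for a supervisory authority" needs to mean.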

Art. 32 · Security measures

Encryption + access control

AES-256 at rest, TLS 1.3 in transit. RBAC for human operators. Agents authenticated per-request. Secret rotation on schedule.

Art. 33/34 · Breach notification

Breach pipeline

Detection → triage → customer notification within 72 hours of becoming aware. Scripted and rehearsed quarterly.
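The 72-hour clock in Article 33 starts when the processor becomes aware of the breach, not when the breach occurred. A trivial sketch of the deadline computation that a breach-pipeline runbook would pin down:

```python
from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at):
    """GDPR Art. 33: notify the supervisory authority without undue delay
    and, where feasible, within 72 hours of becoming aware of a breach."""
    return aware_at + timedelta(hours=72)

aware = datetime(2025, 3, 10, 14, 0, tzinfo=timezone.utc)
deadline = notification_deadline(aware)
print(deadline.isoformat())  # 2025-03-13T14:00:00+00:00
```

Note that the DPA clauses discussed below often commit the vendor to notifying the customer in 48 hours, leaving the customer (as controller) time to meet its own 72-hour authority deadline.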

Common GDPR risks with AI agents — and how to mitigate them

The failure modes auditors look for. Every risk has a concrete mitigation pattern.

Risk

LLM providers use your prompts to train their models.

Mitigation

Only use vendors with signed no-training-on-customer-content clauses. Verify this in the DPA, not just the marketing page.

Risk

Conversation transcripts leak across tenants via a shared vector store.

Mitigation

Require tenant-isolated embeddings. Ask the vendor to show the database layout — shared collections are a red flag.

Risk

Agent logs contain personal data retained beyond necessity.

Mitigation

Customer-configurable log retention per data category. Default 90 days for non-regulated data, 10 years only for GoBD/tax-relevant records.
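Per-category retention is just a lookup with a safe default. A sketch with illustrative category names, using the defaults stated above (90 days for non-regulated data, 10 years for GoBD/tax-relevant records):

```python
# Retention lookup: non-regulated data defaults to 90 days; tax-relevant
# records keep the statutory 10 years (GoBD/RGS). Category names are
# illustrative; a real platform would make this table customer-editable.
RETENTION_DAYS = {
    "conversation_transcript": 90,
    "call_recording": 90,
    "invoice": 365 * 10,  # GoBD / tax retention
}

def retention_days(category, default=90):
    """Days to retain a record of the given category; falls back to the
    short non-regulated default rather than retaining indefinitely."""
    return RETENTION_DAYS.get(category, default)

print(retention_days("invoice"))            # 3650
print(retention_days("unknown_category"))   # 90 (safe default)
```

The key design choice is that the fallback is the short retention period: an unclassified data category is deleted early rather than kept forever, which keeps the storage-limitation principle the default behavior.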

Risk

Phone-call recordings stored without explicit consent.

Mitigation

Call agent discloses recording at call start. Consent logged to audit trail with timestamp and caller identifier. One-click redaction on request.

Risk

Sub-processors outside the EU process personal data.

Mitigation

EU-only data residency option. If US sub-processors are used, DPF (Data Privacy Framework) or SCCs with verifiable transfer-impact assessment.

Risk

Agent decisions cannot be contested because the reasoning is opaque.

Mitigation

Every agent decision logs its reasoning trace. Data subjects receive the reasoning, in plain language, on an Article 15 request.

Risk

A forgotten integration still has OAuth tokens after you cancel.

Mitigation

Token revocation on cancellation is automatic. Customer can also revoke at the source (Google / Microsoft account settings) in one click.

What to look for in the vendor DPA

Request the DPA template before the purchase, not during onboarding. These are the clauses that matter most.

  • ✓ Explicit no-training clause: customer content is never used to train models, with no opt-out fine print
  • ✓ Sub-processor list attached, versioned, with a notification period before changes (30 days minimum)
  • ✓ Data-subject request handling timelines: Article 15 access within 30 days, Article 17 erasure within 30 days including the downstream processor cascade
  • ✓ Audit rights: customer can audit the vendor's security controls annually, or more frequently for cause
  • ✓ Breach notification: 48 hours between vendor awareness and customer notification (stricter than GDPR's 72-hour authority notification)
  • ✓ Data residency commitment: where data is stored, processed, and backed up — in writing, not "typically EU"
  • ✓ Return and deletion on termination: data returned in portable format within 30 days; deletion receipt issued within 60 days
  • ✓ Liability cap clarity: a cap specifically for data-protection breaches, separate from the general liability cap
  • ✓ Applicable law and jurisdiction: ideally the customer's EU member state, not the vendor's home jurisdiction

GDPR checklist for AI agent buyers โ€” 12 points

Print this. Send it to the vendor before the contract. If any answer is "we're working on it", delay the purchase.

01

Data Processing Agreement

Does the vendor sign a GDPR-compliant DPA as standard, not negotiated per customer?

02

No training by contract

Is "we never train on customer content" in the DPA itself, not just the website?

03

Tenant isolation

Is tenant isolation cryptographic — separate encryption keys per customer — or just logical partitioning?

04

Sub-processor transparency

Is there a public sub-processor list with version history? Is LLM provider listed?

05

Data residency

Where is customer data stored, processed, and backed up? Is EU-only an option?

06

Right-to-access export

Can you trigger a full Article 15 export from the admin UI, or does it require a support ticket?

07

Right-to-erasure cascade

Does erasure propagate to all agent caches, vector stores, and sub-processors within 30 days?

08

Article 22 human review

Can you configure which agent decisions require human approval? Default to approval for destructive actions.

09

Audit log export

Is the audit log exportable in a format usable for supervisory-authority inquiries (JSON, CSV)?

10

Breach notification timing

Does the vendor commit to notifying you within 48–72 hours of becoming aware of a breach?

11

Phone-call consent

If voice agents are used, does the platform handle jurisdiction-specific consent rules (two-party in DE/AT/CH)?

12

Exit plan

If you cancel, is there a documented return-and-delete process with a deletion receipt?

Frequently asked questions

Is an AI agent platform a controller or a processor under GDPR?

In almost all SMB scenarios, the customer is the controller and the AI agent platform is the processor. The platform operates on the customer's instructions, processes data in the customer's tenant, and does not determine the purposes of processing.

Does Article 22 (automated decision-making) ban AI agents?

No. Article 22 requires that data subjects have the right to human review for decisions with legal or similarly significant effects. AI agent platforms comply by offering configurable human-approval gates per decision type. Customers (controllers) configure which decisions route through human review.

Can I use a US-based AI agent platform under GDPR?

Yes, if the vendor operates under the EU-US Data Privacy Framework or uses Standard Contractual Clauses with a valid Transfer Impact Assessment. Ask for the DPF certification or SCC documentation before signing. Preferring an EU-headquartered vendor simplifies this.

Do AI agents fall under the EU AI Act on top of GDPR?

Yes. The EU AI Act adds transparency, risk-classification and human-oversight requirements on top of GDPR. Most business AI agents fall into "limited risk" (requiring transparency about AI usage), not "high risk". Voice-facing agents also fall under transparency rules requiring disclosure that the user is talking to AI.

What is the biggest GDPR risk factor for AI agent buyers?

Training clauses. Many vendors default to opt-out-per-feature for model improvement, which is practically unenforceable for SMBs with many users. Insist on contractual no-training by default in the DPA.

Do call recordings under GDPR always require consent?

In Austria, Germany and Switzerland: yes, both parties must consent before recording. In other EU member states, consent rules vary. A compliant voice-agent platform discloses recording at call start and logs consent to the audit trail — an approach that works across jurisdictions.

How long must I retain agent audit logs?

For non-regulated business data: customer choice, default 90 days. For tax-relevant actions (invoices, payments): 10 years under GoBD (DE) / RGS (AT). The compliance agent separates the two so you do not over-retain non-tax data.

Get the DPA before you buy

Book a 30-minute session with our compliance team. We walk through the full DPA and the sub-processor list, and answer every one of the 12 checklist questions on the record.

GDPR-compliant AI agents — the full buyer checklist | DivineMind.AI