
Generative AI (Üretken Yapay Zekâ – ÜYZ) is rapidly transforming how organizations create content, automate tasks, and deliver digital services. But as these systems generate text, images, video, audio, and even deepfakes, they interact deeply with personal data, raising critical compliance questions under Turkey’s Law on the Protection of Personal Data (KVKK).
To address these questions, the Turkish Data Protection Authority (KVKK) released a new official document:
“Üretken Yapay Zekâ ve Kişisel Verilerin Korunması Rehberi (15 Soruda)” (Generative AI and Personal Data Protection Guide, in 15 Questions),
a detailed 63-page guide that outlines how generative AI systems should be used, governed, and assessed from a data protection perspective.
This blog post provides a full, structured English summary of the guide—explaining KVKK’s expectations, key definitions, obligations for organizations, risks, transparency requirements, and recommended safeguards.
KVKK begins with a foundational definitions section, describing the major concepts needed to understand generative AI:
Large language models (LLMs): Models trained on massive text datasets that learn patterns between characters, words, and sentences to perform tasks like text generation, summarization, and Q&A. (Page 4)
Profiling and automated decision-making: Systems that analyze high volumes of personal data to derive correlations or support decision-making.
Big data: Large-scale datasets requiring special technologies and techniques to extract meaningful insights.
Narrow AI: AI designed to perform specific tasks at high proficiency.
The guide also explains how generative AI is used across different media types:
Text: Producing summaries, articles, Q&A content, creative writing, and conversational responses.
Images and video: Creating images, animations, concept art, or videos based on text prompts, often using large visual datasets.
Audio: Producing synthetic voices, sound effects, or music compositions.
Deepfakes: A highlighted concern in the guide, especially for misuse scenarios and child protection implications.
The guide emphasizes that generative AI systems can process personal data even if users do not explicitly input it.
KVKK applies whenever personal data is processed in connection with a generative AI system. Even if an organization is not training models but only using AI tools, KVKK obligations still apply once personal data is involved.
The guide aligns with the standard KVKK principles but highlights new interpretations for AI systems:
Lawfulness and transparency: AI users and providers must inform individuals about data processing in clear terms.
Purpose limitation: Data collected for training must be used for clearly defined, legal purposes.
Data minimization: Only data strictly necessary to train or operate the model can be processed.
Accuracy: Generative systems can produce incorrect or misleading outputs, and organizations must mitigate this.
Storage limitation: Data retention limits apply not only to user data but also to logs, model training sets, and embeddings.
Security: Organizations must implement technical and administrative safeguards comparable to those described in KVKK’s “Technical and Administrative Measures Guide”.
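To make the data-minimization principle concrete: a pipeline that forwards user input to an external generative AI service might strip obvious personal identifiers first. The sketch below is illustrative only (the patterns and function name are our own, not from the guide), and regex matching is far weaker than real PII-detection tooling:

```python
import re

# Illustrative patterns only -- robust PII detection needs dedicated
# tooling (e.g. NER models), not simple regular expressions.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "tckn": re.compile(r"\b\d{11}\b"),  # Turkish national ID number format
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders before
    the text is sent to an external generative AI API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact ali@example.com, TCKN 12345678901"))
# -> Contact [EMAIL], TCKN [TCKN]
```

Redacting before the API call also narrows what ends up in provider-side logs, which matters for the storage-limitation principle above.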
One of the most explicit and detailed sections is the transparency requirement, explained on page 49 of the guide. KVKK emphasizes that transparency is essential so individuals can control their data.
KVKK restates that the full Article 10 disclosure (aydınlatma) obligation applies to AI systems, and organizations must disclose every element required under that article.
The guide specifically requires separate and explicit transparency notices for each distinct processing activity. This separation is critical because the legal bases, purposes, and risks of each activity differ.
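One practical way to keep separate notices consistent is to hold the Article 10 disclosure elements in a structured record per processing activity. A minimal sketch, with field names that are our own invention rather than anything prescribed by the guide:

```python
from dataclasses import dataclass

@dataclass
class TransparencyNotice:
    """Fields mirroring the KVKK Article 10 disclosure items.
    Field names are illustrative, not prescribed by the guide."""
    controller_identity: str   # data controller (and representative, if any)
    purposes: list             # purposes of processing
    recipients: list           # to whom and why data may be transferred
    collection_method: str     # method of collecting the data
    legal_basis: str           # legal basis for processing
    data_subject_rights: str = "Rights under KVKK Article 11 apply."

# One record per processing activity; an AI deployment would keep
# a distinct notice for each activity it performs.
notice = TransparencyNotice(
    controller_identity="Example Corp.",
    purposes=["operating a customer-support chatbot"],
    recipients=["cloud AI provider"],
    collection_method="user-entered chat prompts",
    legal_basis="to be determined per activity (see legal bases below)",
)
```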
The guide does not create new legal bases but clarifies how the existing KVKK legal bases apply to generative AI.
AI tools frequently involve international data transfers.
KVKK explicitly ties this to the Cross-Border Transfer Guide (2025) referenced in the appendix.
Organizations must comply with the cross-border transfer requirements under KVKK Article 9.
The guide highlights several risk categories:
Inaccurate outputs: Incorrect information may harm individuals or misrepresent them.
Bias: Training data may embed historical or societal biases.
Deepfakes: Particularly dangerous for children, harassment, and identity manipulation.
Security threats: Prompt injection, data leakage, or adversarial inputs.
Unlawful data collection: Including scraping of large datasets without a legal basis.
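To make the prompt-injection risk above concrete: when untrusted text is concatenated directly into a model prompt, instructions embedded in that text can override the system's own. The screening sketch below is deliberately naive, our own illustration rather than anything from the guide, and not a real defense:

```python
import re

# Naive keyword screen -- real mitigations rely on input/output
# isolation, allow-lists, and model-side controls, not keyword matching.
INJECTION_MARKERS = re.compile(
    r"ignore (all )?(previous|prior) instructions|you are now",
    re.IGNORECASE,
)

def build_prompt(system: str, untrusted: str) -> str:
    """Refuse obviously suspicious input and delimit the rest so the
    model can distinguish instructions from user-supplied data."""
    if INJECTION_MARKERS.search(untrusted):
        raise ValueError("possible prompt injection in user-supplied text")
    return f"{system}\n\n<user_data>\n{untrusted}\n</user_data>"

build_prompt("Summarize the document.", "Quarterly revenue rose 4%.")
```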
KVKK expects organizations to follow the standard administrative and technical measures, similar to those described in other KVKK guides (2025 editions referenced in the annex).
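Storage limitation applies to prompt and response logs as much as to user data, so one common administrative measure is an automated purge job. The sketch below assumes a simple directory of log files and a 90-day policy; both are illustrative assumptions, not requirements from the guide:

```python
import time
from pathlib import Path

RETENTION_DAYS = 90  # illustrative; set according to your retention policy

def purge_expired_logs(log_dir: Path) -> list:
    """Delete log files whose modification time exceeds the retention
    period; return the paths that were removed."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    removed = []
    for path in sorted(log_dir.glob("*.log")):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path)
    return removed
```

A scheduled job like this also gives auditors a concrete artifact showing that retention limits are enforced rather than merely documented.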
Section 14 offers practical advice for everyday users of generative AI tools.
Section 15 provides guidelines for protecting children from misuse of AI tools.
To comply with KVKK, organizations should establish internal AI governance structures.
This new 15-question KVKK guide is one of the most comprehensive official resources on generative AI released by any data protection authority globally. It aligns with international standards (EDPB, EDPS, ICO, UNESCO), while adding Turkey-specific obligations around transparency, transfer rules, explicit consent, and risk management.
For organizations building, training, or integrating generative AI into products or workflows, this guide is now an essential compliance reference.
source: https://www.kvkk.gov.tr/